Science.gov

Sample records for acenocoumarol dosing algorithm

  1. Pharmacogenetic-guided dosing of coumarin anticoagulants: algorithms for warfarin, acenocoumarol and phenprocoumon.

    PubMed

    Verhoef, Talitha I; Redekop, William K; Daly, Ann K; van Schie, Rianne M F; de Boer, Anthonius; Maitland-van der Zee, Anke-Hilse

    2014-04-01

    Coumarin derivatives such as warfarin, acenocoumarol and phenprocoumon are frequently prescribed oral anticoagulants used to treat and prevent thromboembolism. Because there is large inter-individual and intra-individual variability in dose-response and a small therapeutic window, treatment with coumarin derivatives is challenging. Certain polymorphisms in CYP2C9 and VKORC1 are associated with lower dose requirements and a higher risk of bleeding. In this review we describe the use of the different coumarin derivatives, the pharmacokinetic characteristics of these drugs and the differences amongst the coumarins. We also describe the current clinical challenges and the role of pharmacogenetic factors. These genetic factors have been used to develop dosing algorithms that can predict the appropriate coumarin dose. The effectiveness of this new dosing strategy is currently being investigated in clinical trials. PMID:23919835

  2. Efficiency and effectiveness of the use of an acenocoumarol pharmacogenetic dosing algorithm versus usual care in patients with venous thromboembolic disease initiating oral anticoagulation: study protocol for a randomized controlled trial

    PubMed Central

    2012-01-01

    Background: Hemorrhagic events are frequent in patients on treatment with antivitamin-K oral anticoagulants due to their narrow therapeutic margin. Studies performed with acenocoumarol have shown the relationship between demographic, clinical and genotypic variants and the response to these drugs. Once the influence of these genetic and clinical factors on the dose of acenocoumarol needed to maintain a stable international normalized ratio (INR) has been demonstrated, new strategies need to be developed to predict the appropriate doses of this drug. Several pharmacogenetic algorithms have been developed for warfarin, but only three have been developed for acenocoumarol. After the development of a pharmacogenetic algorithm, the obvious next step is to demonstrate its effectiveness and utility by means of a randomized controlled trial. The aim of this study is to evaluate the effectiveness and efficiency of an acenocoumarol dosing algorithm developed by our group, which includes demographic, clinical and pharmacogenetic variables (VKORC1, CYP2C9, CYP4F2 and ApoE), in patients with venous thromboembolism (VTE). Methods and design: This is a multicenter, single-blind, randomized controlled clinical trial. The protocol has been approved by the La Paz University Hospital Research Ethics Committee and by the Spanish Drug Agency. Two hundred and forty patients with VTE in whom oral anticoagulant therapy is indicated will be included. Randomization (case/control 1:1) will be stratified by center. The acenocoumarol dose in the control group will be scheduled and adjusted following common clinical practice; in the experimental arm, dosing will follow an individualized algorithm developed and validated by our group. Patients will be followed for three months. The main endpoints are: 1) Percentage of patients with INR within the therapeutic range on day seven after initiation of oral anticoagulant therapy; 2) Time from the start of oral anticoagulant treatment to achievement of a

  3. A New Pharmacogenetic Algorithm to Predict the Most Appropriate Dosage of Acenocoumarol for Stable Anticoagulation in a Mixed Spanish Population.

    PubMed

    Tong, Hoi Y; Dávila-Fajardo, Cristina Lucía; Borobia, Alberto M; Martínez-González, Luis Javier; Lubomirov, Rubin; Perea León, Laura María; Blanco Bañares, María J; Díaz-Villamarín, Xando; Fernández-Capitán, Carmen; Cabeza Barrera, José; Carcas, Antonio J

    2016-01-01

    There is a strong association between genetic polymorphisms and the acenocoumarol dosage requirements. Genotyping the polymorphisms involved in the pharmacokinetics and pharmacodynamics of acenocoumarol before starting anticoagulant therapy would result in a better quality of life and a more efficient use of healthcare resources. The objective of this study is to develop a new algorithm that includes clinical and genetic variables to predict the most appropriate acenocoumarol dosage for stable anticoagulation in a wide range of patients. We recruited 685 patients from 2 Spanish hospitals and 1 primary healthcare center. We randomly chose 80% of the patients (n = 556), considering an equitable distribution of genotypes to form the generation cohort. The remaining 20% (n = 129) formed the validation cohort. Multiple linear regression was used to generate the algorithm using the acenocoumarol stable dosage as the dependent variable and the clinical and genotypic variables as the independent variables. The variables included in the algorithm were age, weight, amiodarone use, enzyme inducer status, international normalized ratio target range and the presence of CYP2C9*2 (rs1799853), CYP2C9*3 (rs1057910), VKORC1 (rs9923231) and CYP4F2 (rs2108622). The coefficient of determination (R2) explained by the algorithm was 52.8% in the generation cohort and 64% in the validation cohort. The following R2 values were evaluated by pathology: atrial fibrillation, 57.4%; valve replacement, 56.3%; and venous thromboembolic disease, 51.5%. When the patients were classified into 3 dosage groups according to the stable dosage (<11 mg/week, 11-21 mg/week, >21 mg/week), the percentage of correctly classified patients was higher in the intermediate group, whereas differences between pharmacogenetic and clinical algorithms increased in the extreme dosage groups. Our algorithm could improve acenocoumarol dosage selection for patients who will begin treatment with this drug, especially in
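
    As a minimal sketch of the kind of model this abstract describes, the Python function below implements a multiple linear regression from the listed clinical and genotypic covariates to a stable weekly dose. The intercept and every coefficient are invented for illustration; they are not the study's fitted values, and the function is not dosing advice.

      def predict_weekly_dose_mg(age_years, weight_kg, amiodarone, enzyme_inducer,
                                 high_inr_target, cyp2c9_2_alleles, cyp2c9_3_alleles,
                                 vkorc1_a_alleles, cyp4f2_t_alleles):
          """Predicted stable acenocoumarol dose in mg/week (illustrative only)."""
          dose = 20.0                      # hypothetical intercept
          dose -= 0.15 * (age_years - 60)  # older patients tend to need less
          dose += 0.10 * (weight_kg - 70)  # heavier patients tend to need more
          dose -= 4.0 * amiodarone         # CYP inhibitor lowers requirements
          dose += 5.0 * enzyme_inducer     # enzyme inducers raise requirements
          dose += 2.0 * high_inr_target    # higher INR target range, higher dose
          dose -= 2.5 * cyp2c9_2_alleles   # each CYP2C9*2 allele lowers the dose
          dose -= 4.0 * cyp2c9_3_alleles   # *3 has a stronger effect than *2
          dose -= 5.5 * vkorc1_a_alleles   # VKORC1 -1639A is the dominant factor
          dose += 1.0 * cyp4f2_t_alleles   # CYP4F2 variant slightly raises dose
          return max(dose, 1.0)            # clamp to a plausible minimum

      # Example: 72-year-old, 80 kg, CYP2C9*1/*3, VKORC1 GA, no interacting drugs.
      print(predict_weekly_dose_mg(72, 80, 0, 0, 0, 0, 1, 1, 0))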

  4. Bioequivalence of acenocoumarol in Chilean volunteers: an open, randomized, double-blind, single-dose, 2-period, and 2-sequence crossover study for 2 oral formulations.

    PubMed

    Sasso, J; Carmona, P; Quiñones, L; Ortiz, M; Tamayo, E; Varela, N; Cáceres, D; Saavedra, I

    2012-08-01

    The aim of this study was to compare the bioavailability of an oral formulation of the coumarin derivative and vitamin K antagonist acenocoumarol (Acebron™ 4 mg, Test) with the reference formulation (Neo-Sintrom™ 4 mg). We performed a single-dose, double-blind, fasting, 2-period, 2-sequence crossover study. Plasma concentrations of acenocoumarol were determined using a validated UPLC-MS/MS method. 24 healthy Chilean volunteers (11 male, 13 female) were enrolled and all of them completed the study. Adverse events were monitored throughout the study. The values of the pharmacokinetic parameters were (mean ± SD): AUC0-24 = 1364.38±499.26 ng·h/mL for the test and 1328.39±429.20 ng·h/mL for the reference; AUC0-∞ = 1786.00±732.85 ng·h/mL for the test and 1706.71±599.66 ng·h/mL for the reference; Cmax = 180.69±35.11 ng/mL with a Tmax of 1.83±0.95 h for the test and 186.97±38.21 ng/mL with a Tmax of 2.19±0.83 h for the reference. Regarding half-life, the mean ± SD of t1/2 was 11.84±4.54 h for the test and 11.08±3.28 h for the reference. The 90% confidence intervals for the test/reference ratio using log-transformed data were 97.89-100.87%, 98.62-101.99% and 98.64-102.38% for Cmax, AUC0-t(24) and AUC0-∞, respectively. There were no significant differences in pharmacokinetic parameters between groups. The results obtained in this study lead us to conclude, based on FDA criteria, that the test acenocoumarol formulation (Acebron™, 4 mg tablets) is bioequivalent to the reference product (Neo-Sintrom™, 4 mg tablets). PMID:22773430
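
    A minimal sketch of the standard calculation behind such results: a 90% confidence interval for the test/reference ratio of a log-normally distributed PK parameter (e.g. AUC), computed from paired crossover data. The numbers below are synthetic, and a full analysis would also model period and sequence effects.

      import numpy as np
      from scipy import stats

      auc_test = np.array([1420., 1310., 1550., 1280., 1490., 1360.])
      auc_ref  = np.array([1390., 1330., 1500., 1300., 1450., 1400.])

      d = np.log(auc_test) - np.log(auc_ref)     # within-subject log differences
      n = len(d)
      se = d.std(ddof=1) / np.sqrt(n)
      t90 = stats.t.ppf(0.95, df=n - 1)          # two-sided 90% interval
      lo = np.exp(d.mean() - t90 * se)
      hi = np.exp(d.mean() + t90 * se)

      print(f"90% CI for test/reference ratio: {lo:.1%} - {hi:.1%}")
      print("bioequivalent (80-125% rule):", 0.80 <= lo and hi <= 1.25)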

  5. Cytochrome P450 (CYP2C9*2,*3) & vitamin-K epoxide reductase complex (VKORC1 -1639G>A) gene polymorphisms & their effect on acenocoumarol dose in patients with mechanical heart valve replacement

    PubMed Central

    Kaur, Anupriya; Khan, Farah; Agrawal, Suraksha S.; Kapoor, Aditya; Agarwal, Surendra K.; Phadke, Shubha R.

    2013-01-01

    Background & objectives: Studies have demonstrated the effect of CYP2C9 (cytochrome P450) and VKORC1 (vitamin K epoxide reductase complex) gene polymorphisms on the dose of acenocoumarol. Data from India about these gene polymorphisms and their effects on acenocoumarol dose are scarce. The aim of this study was to determine the occurrence of CYP2C9*2,*3 and VKORC1 -1639G>A gene polymorphisms and to study their effects on the dose of acenocoumarol required to maintain a target International Normalized Ratio (INR) in patients with mechanical heart valve replacement. Methods: Patients from the anticoagulation clinic of a tertiary care hospital in north India were studied. The anticoagulation profile, INR values and administered acenocoumarol dose were obtained from the clinical records of patients. Determination of the CYP2C9*2,*3 and VKORC1 -1639G>A genotypes was done by PCR-RFLP (restriction fragment length polymorphism). Results: A total of 111 patients were studied. The genotype frequencies of CYP2C9 *1/*1, *1/*2 and *1/*3 were 0.883, 0.072 and 0.036, and those of the VKORC1 -1639G>A GG, AG and AA genotypes were 0.883, 0.090 and 0.027, respectively. The percentage of patients carrying any of the variant alleles of CYP2C9 and VKORC1 in heterozygous or homozygous form was 34% among those receiving a low dose of ≤20 mg/wk, while it was 13.8% in those receiving >20 mg/wk (P=0.014). A tendency towards lower dose requirements was seen among carriers of the studied polymorphisms. There was considerable variability in the dose requirements of patients with and without variant alleles. Interpretation & conclusions: The study findings point towards a role of CYP2C9 and VKORC1 gene polymorphisms in determining the inter-individual dose variability of acenocoumarol in Indian patients with mechanical heart valve replacement. PMID:23481074
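
    The carrier-by-dose-group comparison reported here (34% vs. 13.8%, P=0.014) can be reproduced approximately from a 2x2 contingency table. The counts below are reconstructed from the reported percentages (17/50 and 8/58) purely for illustration, and Fisher's exact test is used here rather than the study's own test.

      from scipy.stats import fisher_exact

      # Rows: dose group; columns: [variant carriers, non-carriers].
      low_dose  = [17, 33]   # <=20 mg/wk: 17/50 = 34% carriers
      high_dose = [8, 50]    # >20 mg/wk:   8/58 = 13.8% carriers

      odds_ratio, p_value = fisher_exact([low_dose, high_dose])
      print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")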

  6. Protective Effect of Pretreatment with Acenocoumarol in Cerulein-Induced Acute Pancreatitis

    PubMed Central

    Warzecha, Zygmunt; Sendur, Paweł; Ceranowicz, Piotr; Dembiński, Marcin; Cieszkowski, Jakub; Kuśnierz-Cabala, Beata; Olszanecki, Rafał; Tomaszewska, Romana; Ambroży, Tadeusz; Dembiński, Artur

    2016-01-01

    Coagulation is recognized as a key player in inflammatory and autoimmune diseases. The aim of the current research was to examine the effect of pretreatment with acenocoumarol on the development of acute pancreatitis (AP) evoked by cerulein. Methods: AP was induced in rats by cerulein administered intraperitoneally. Acenocoumarol (50, 100 or 150 µg/kg/dose/day) or saline was given once daily for seven days before AP induction. Results: In rats with AP, pretreatment with acenocoumarol administered at a dose of 50 or 100 µg/kg/dose/day improved pancreatic histology, reducing the degree of edema, inflammatory infiltration and vacuolization of acinar cells. Moreover, pretreatment with acenocoumarol given at a dose of 50 or 100 µg/kg/dose/day reduced the AP-evoked increase in pancreatic weight, serum activity of amylase and lipase, and serum concentration of the pro-inflammatory interleukin-1β, as well as improved pancreatic DNA synthesis and pancreatic blood flow. In contrast, acenocoumarol given at a dose of 150 μg/kg/dose did not exhibit any protective effect against cerulein-induced pancreatitis. Conclusion: Low doses of acenocoumarol, given before the induction of AP by cerulein, inhibit the development of this inflammation. PMID:27754317

  7. [Resistance to acenocoumarol revealing a missense mutation of the vitamin K epoxide reductase VKORC1: a case report].

    PubMed

    Mboup, M C; Dia, K; Ba, D M; Fall, P D

    2015-02-01

    A significant proportion of the interindividual variability of the response to vitamin K antagonist (VKA) treatment has been associated with genetic factors. Genetic variations affecting the vitamin K epoxide reductase complex subunit 1 (VKORC1) are associated with hypersensitivity or, rarely, with resistance to VKAs. We report the case of a black woman who presented resistance to acenocoumarol. Despite the use of high doses of acenocoumarol (114 mg/week) for the treatment of recurrent pulmonary embolism, the International Normalized Ratio remained below the therapeutic target. This resistance to acenocoumarol was confirmed by the identification of a Val66Met missense mutation of the vitamin K epoxide reductase. PMID:24095214

  8. Influence of CYP2C9 and VKORC1 polymorphisms on warfarin and acenocoumarol in a sample of Lebanese people.

    PubMed

    Esmerian, Maria O; Mitri, Zahi; Habbal, Mohammad-Zuheir; Geryess, Eddy; Zaatari, Ghazi; Alam, Samir; Skouri, Hadi N; Mahfouz, Rami A; Taher, Ali; Zgheib, Nathalie K

    2011-10-01

    The authors assessed the impact of CYP2C9*2, CYP2C9*3, and/or VKORC1 -1639G>A/1173C>T single-nucleotide polymorphisms on oral anticoagulants in a Lebanese population. This study recruited 231 Lebanese participants on long-term warfarin or acenocoumarol maintenance therapy with an international normalized ratio (INR) monitored at the American University of Beirut Medical Center. CYP2C9 and VKORC1 variant alleles were screened by real-time PCR. Plasma R- and S-warfarin and R- and S-acenocoumarol levels were assayed using high-performance liquid chromatography. The variant allele frequencies of CYP2C9*2, CYP2C9*3, and VKORC1 -1639G>A/1173C>T were 15.4%, 7.8%, and 52.4%, respectively. Fifty-five participants were excluded from analysis because of nontherapeutic INR values at recruitment, leaving 43 participants taking warfarin and 133 taking acenocoumarol. There was a significant decrease in the weekly maintenance dose of both drugs with CYP2C9 and VKORC1 variants when compared with wild-type patients. CYP2C9*2 had the least impact on the response to both drugs. The concentrations of R- and S-warfarin in plasma were significantly correlated with CYP2C9 genotypes. For acenocoumarol, the time to reach the target INR was more prolonged in patients carrying any CYP2C9 variant allele but failed to reach statistical significance because of the low number of patients. There was no association between allelic variants and bleeding events. This is the first pharmacogenetic study of oral anticoagulants in Arabs. The authors showed that both CYP2C9 and VKORC1 polymorphisms are common in Lebanon and influence warfarin and acenocoumarol dose requirements, with the CYP2C9*2 polymorphism having less effect on acenocoumarol, the most commonly used oral anticoagulant in Lebanon. PMID:21148049

  9. Acenocoumarol sensitivity and pharmacokinetic characterization of CYP2C9 *5/*8, *8/*11, *9/*11 and VKORC1*2 in black African healthy Beninese subjects.

    PubMed

    Allabi, Aurel Constant; Horsmans, Yves; Alvarez, Jean-Claude; Bigot, André; Verbeeck, Roger K; Yasar, Umit; Gala, Jean-Luc

    2012-06-01

    This study aimed at investigating the contribution of CYP2C9 and VKORC1 genetic polymorphisms to the inter-individual variability of acenocoumarol pharmacokinetics and pharmacodynamics in Black Africans from Benin. Fifty-one healthy volunteers were genotyped for the VKORC1 1173C>T polymorphism. All of the subjects had previously been genotyped for the CYP2C9*5, CYP2C9*6, CYP2C9*8, CYP2C9*9 and CYP2C9*11 alleles. Thirty-six subjects were phenotyped with a single 8 mg oral dose of acenocoumarol by measuring plasma concentrations of (R)- and (S)-acenocoumarol 8 and 24 h after administration using chiral liquid chromatography-tandem mass spectrometry. International normalized ratio (INR) values were determined prior to and 24 h after drug intake. The allele frequency of the VKORC1 variant (1173C>T) was 1.96% (95% CI 0.0-4.65%). The INR values did not show a statistically significant difference between the CYP2C9 genotypes, but were correlated with body mass index and age at 24 h post-dosing (P < 0.05). At 8 h post-dose, the (S)-acenocoumarol concentrations in the CYP2C9*5/*8 and CYP2C9*9/*11 genotypes were about 1.9- and 5.1-fold higher compared with the CYP2C9*1/*1 genotype and 2.2- and 6.0-fold higher compared with the CYP2C9*1/*9 group, respectively. The results indicated that the pharmacodynamic response to acenocoumarol is highly variable between subjects. This variability seems to be associated with the CYP2C9*5/*8 and *9/*11 variants and demographic factors (age and weight) in Beninese subjects. The significant association between plasma (S)-acenocoumarol concentration and CYP2C9 genotypes suggests the use of (S)-acenocoumarol for phenotyping purposes. A larger number of subjects is needed to study the effect of the VKORC1 1173C>T variant due to its low frequency in the Beninese population. PMID:21811894

  10. Determination of acenocoumarol in human plasma by capillary gas chromatography with mass-selective detection.

    PubMed

    Pommier, F; Ackermann, R; Sioufi, A; Godbillon, J

    1994-03-18

    A method for the determination of acenocoumarol in human plasma by capillary gas chromatography with mass-selective detection is described. After addition of a structurally related analogue as the internal standard, the compounds are extracted from plasma at acidic pH into toluene, back-extracted with a basic solution and re-extracted from hydrochloric acid solution with toluene, which is then evaporated to dryness. The compounds are converted into their methyl derivatives, which are determined by gas chromatography using a mass-selective detector at m/z 324 for acenocoumarol and m/z 338 for the internal standard. The reproducibility and accuracy of the method were found to be suitable over the acenocoumarol concentration range of 2.2-74 nmol/l. The method can be considered selective for acenocoumarol in the presence of its major metabolites in plasma.

  11. Pharmacogenetic-guided Warfarin Dosing Algorithm in African-Americans.

    PubMed

    Alzubiedi, Sameh; Saleh, Mohammad I

    2016-01-01

    We aim to develop warfarin dosing algorithm for African-Americans. We explored demographic, clinical, and genetic data from a previously collected cohort of 163 African-American patients with a stable warfarin dose. We explored 2 approaches to develop the algorithm: multiple linear regression and artificial neural network (ANN). The clinical significance of the 2 dosing algorithms was evaluated by calculating the percentage of patients whose predicted dose of warfarin was within 20% of the actual dose. Linear regression model and ANN model predicted the ideal dose in 52% and 48% of the patients, respectively. The mean absolute error using linear regression model was estimated to be 10.8 mg compared with 10.9 mg using ANN. Linear regression and ANN models identified several predictors of warfarin dose including age, weight, CYP2C9 genotype *1/*1, VKORC1 genotype, rs12777823 genotype, rs2108622 genotype, congestive heart failure, and amiodarone use. In conclusion, we developed a warfarin dosing algorithm for African-Americans. The proposed dosing algorithm has the potential to recommend warfarin doses that are close to the appropriate doses. The use of more sophisticated ANN approach did not result in improved predictive performance of the dosing algorithm except for patients of a dose of ≥49 mg/wk. PMID:26355760
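
    The headline metrics in this abstract (share of patients predicted within 20% of the actual dose, and mean absolute error) are straightforward to compute. The sketch below uses synthetic dose arrays, not the study's data.

      import numpy as np

      actual    = np.array([35., 49., 28., 56., 42., 21.])   # mg/week
      predicted = np.array([38., 40., 30., 50., 45., 27.])

      within_20pct = np.abs(predicted - actual) <= 0.20 * actual
      mae = np.abs(predicted - actual).mean()

      print(f"within 20% of actual dose: {within_20pct.mean():.0%}")
      print(f"mean absolute error: {mae:.1f} mg/week")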

  12. A TLD dose algorithm using artificial neural networks

    SciTech Connect

    Moscovitch, M.; Rotunda, J.E.; Tawil, R.A.; Rathbone, B.A.

    1995-12-31

    An artificial neural network was designed and used to develop a dose algorithm for a multi-element thermoluminescence dosimeter (TLD). The neural network architecture is based on the concept of a functional-link network (FLN). A neural network is an information processing method inspired by the biological nervous system. A dose algorithm based on neural networks is fundamentally different from conventional algorithms, as it has the capability to learn from its own experience. The neural network algorithm is repeatedly shown the expected dose values (output) associated with given responses of a multi-element dosimeter (input). Trained in this way, it eventually becomes capable of producing its own solutions to similar (but not identical) dose calculation problems. For personnel dosimetry, the output consists of the desired dose components: deep dose, shallow dose and eye dose. The input consists of the TL data obtained from the readout of a multi-element dosimeter. The neural network approach was applied to the Harshaw Type 8825 TLD and was shown to significantly improve the performance of this dosimeter, well within the U.S. accreditation requirements for personnel dosimeters.
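
    One common reading of the functional-link idea is: augment the raw inputs with fixed nonlinear transformations, then fit only a linear output layer. The sketch below trains such a mapping from four synthetic element readings to three dose components; the data, the chosen link functions, and the least-squares fit are all illustrative assumptions, not the paper's calibration.

      import numpy as np

      rng = np.random.default_rng(0)

      def expand(x):
          """Functional-link expansion: raw terms plus simple nonlinear links."""
          return np.concatenate([x, x**2, np.log1p(x), [1.0]])

      # Synthetic calibration set: 200 dosimeters, 4 element readings each,
      # mapped to (deep, shallow, eye) dose with a little noise.
      readings = rng.uniform(0.1, 10.0, size=(200, 4))
      true_map = rng.uniform(0.2, 1.0, size=(4, 3))
      doses = readings @ true_map + 0.01 * rng.standard_normal((200, 3))

      X = np.array([expand(r) for r in readings])
      W, *_ = np.linalg.lstsq(X, doses, rcond=None)  # fit the linear output layer

      test = rng.uniform(0.1, 10.0, size=4)
      print("predicted (deep, shallow, eye) dose:", expand(test) @ W)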

  13. Effectiveness and safety of dabigatran versus acenocoumarol in ‘real-world’ patients with atrial fibrillation

    PubMed Central

    Korenstra, Jennie; Wijtvliet, E. Petra J.; Veeger, Nic J.G.M.; Geluk, Christiane A.; Bartels, G. Louis; Posma, Jan L.; Piersma-Wichers, Margriet; Van Gelder, Isabelle C.; Rienstra, Michiel; Tieleman, Robert G.

    2016-01-01

    Aims: Randomized trials have shown non-inferior or superior results of the non-vitamin-K-antagonist oral anticoagulants (NOACs) compared with warfarin. The aim of this study was to assess the effectiveness and safety of dabigatran (a direct thrombin inhibitor) vs. acenocoumarol (a vitamin K antagonist) in patients with atrial fibrillation (AF) in daily clinical practice. Methods and results: In this observational study, we evaluated all consecutive patients who started anticoagulation because of AF in our outpatient clinic from 2010 to 2013. Data were collected from electronic patient charts. Primary outcomes were stroke or systemic embolism and major bleeding. Propensity score matching was applied to address the non-randomized design. In total, 920 consecutive AF patients were enrolled (442 dabigatran, 478 acenocoumarol), of which 2 × 383 were available for analysis after propensity score matching. Mean follow-up duration was 1.5 ± 0.56 years. The mean calculated stroke risk according to the CHA2DS2-VASc score was 3.5%/year in dabigatran- vs. 3.7%/year in acenocoumarol-treated patients. The actual incidence rate of stroke or systemic embolism was 0.8%/year [95% confidence interval (CI): 0.2–2.1] vs. 1.0%/year (95% CI: 0.4–2.1), respectively. Multivariable analysis confirmed this lower but non-significant risk for dabigatran vs. acenocoumarol after adjustment for the CHA2DS2-VASc score [hazard ratio (HR) for dabigatran = 0.72, 95% CI: 0.20–2.63, P = 0.61]. According to the HAS-BLED score, the mean calculated bleeding risk was 1.7%/year in both groups. The actual incidence rate of major bleeding was 2.1%/year (95% CI: 1.0–3.8) for dabigatran vs. 4.3%/year (95% CI: 2.9–6.2) for acenocoumarol. This over 50% reduction remained significant after adjustment for the HAS-BLED score (HR for dabigatran = 0.45, 95% CI: 0.22–0.93, P = 0.031). Conclusion: In ‘real-world’ patients with AF, dabigatran appears to be as effective as, but significantly safer than, acenocoumarol. PMID:26843571
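
    Propensity score matching, as used here to address the non-randomized design, can be sketched as: fit a treatment-assignment model on baseline covariates, then greedily pair each treated patient with the nearest-score untreated patient without replacement. The data below are simulated and the greedy 1:1 matcher is a generic illustration, not the study's exact procedure.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(1)
      n = 400
      covariates = rng.standard_normal((n, 3))   # e.g. age, stroke risk, bleeding risk
      treated = (covariates @ [0.8, 0.4, -0.3] + rng.standard_normal(n)) > 0

      # Propensity score: estimated probability of receiving the new drug.
      ps = LogisticRegression().fit(covariates, treated).predict_proba(covariates)[:, 1]

      treated_idx = np.flatnonzero(treated)
      available = set(np.flatnonzero(~treated))
      matches = []
      for i in treated_idx:
          j = min(available, key=lambda c: abs(ps[c] - ps[i]))  # nearest score
          matches.append((i, j))
          available.remove(j)                                   # without replacement
          if not available:
              break

      print(f"matched pairs: {len(matches)}")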

  14. Impact of dose calculation algorithm on radiation therapy

    PubMed Central

    Chen, Wen-Zhou; Xiao, Ying; Li, Jun

    2014-01-01

    The quality of radiation therapy depends on the ability to maximize the tumor control probability while minimizing the normal tissue complication probability. Both of these quantities are directly related to the accuracy of the dose distributions calculated by treatment planning systems. The dose calculation algorithms commonly used in treatment planning systems are reviewed in this work. Accuracy comparisons among these algorithms are illustrated by summarizing the most highly cited research papers on this topic. Further, the correlation between the algorithms and tumor control probability/normal tissue complication probability values is demonstrated by several recent studies from different groups. All the cases demonstrate that dose calculation algorithms play a vital role in radiation therapy. PMID:25431642

  15. The grid-dose-spreading algorithm for dose distribution calculation in heavy charged particle radiotherapy

    SciTech Connect

    Kanematsu, Nobuyuki; Yonai, Shunsuke; Ishizaki, Azusa

    2008-02-15

    A new variant of the pencil-beam (PB) algorithm for dose distribution calculation for radiotherapy with protons and heavier ions, the grid-dose-spreading (GDS) algorithm, is proposed. The GDS algorithm is intrinsically faster than conventional PB algorithms due to approximations in the convolution integral, where physical calculations are decoupled from simple grid-to-grid energy transfer. It was effortlessly implemented in a carbon-ion radiotherapy treatment planning system to enable realistic beam blurring in the field, which was absent with the broad-beam (BB) algorithm. For a typical prostate treatment, the slowing factor of the GDS algorithm relative to the BB algorithm was 1.4, a great improvement over conventional PB algorithms with a typical slowing factor of several tens. The GDS algorithm is mathematically equivalent to the PB algorithm for the horizontal and vertical coplanar beams commonly used in carbon-ion radiotherapy, while dose deformation within the size of the pristine spread occurs for angled beams; this deformation was within 3 mm for a single 150 MeV proton pencil beam at 30° incidence and needs to be assessed against clinical requirements and tolerances in practical situations.

  16. Dosing Algorithms to Predict Warfarin Maintenance Dose in Caucasians and African Americans

    PubMed Central

    Schelleman, Hedi; Chen, Jinbo; Chen, Zhen; Christie, Jason; Newcomb, Craig W.; Brensinger, Colleen M.; Price, Maureen; Whitehead, Alexander S.; Kealey, Carmel; Thorn, Caroline F.; Samaha, Frederick F.; Kimmel, Stephen E

    2008-01-01

    Objectives: The objective of this study was to determine whether clinical, environmental, and genetic factors can be used to develop dosing algorithms for Caucasians and African Americans that perform better than empirical dosing at 5 mg/day. Methods: From April 2002 through December 2005, 259 warfarin initiators were prospectively followed until they reached their maintenance dose. Results: The Caucasian algorithm included 11 variables (R2=0.43). This model (51% of predictions within 1 mg of the observed dose) performed better than 5 mg/day (29% within 5±1 mg). The African American algorithm included 10 variables (R2=0.28). This model predicted 37% of doses within 1 mg of the observed dose, a small improvement compared with 5 mg/day (34%). These results were similar to the results we obtained from testing other (published) algorithms. Conclusions: The dosing algorithms in Caucasians explained <45% of the variability, and the algorithms in African Americans performed only marginally better than giving 5 mg empirically. PMID:18596683

  17. Verification of IMRT dose calculations using AAA and PBC algorithms in dose buildup regions.

    PubMed

    Oinam, Arun S; Singh, Lakhwant

    2010-08-26

    The purpose of this comparative study was to test the accuracy of the anisotropic analytical algorithm (AAA) and pencil beam convolution (PBC) algorithms of the Eclipse treatment planning system (TPS) for dose calculations in low- and high-dose buildup regions. AAA and PBC were used to create two intensity-modulated radiotherapy (IMRT) plans from the same optimal fluence, generated from a clinically simulated oropharynx case in an in-house fabricated head and neck phantom. The TPS-computed buildup doses were compared with the corresponding doses measured in the phantom using thermoluminescence dosimeters (TLD 100). Analysis of the dose distributions calculated using PBC and AAA shows an increase in gamma value in the dose buildup region, indicating large dose deviation. For surface areas of 1, 50 and 100 cm2, PBC overestimated doses compared to AAA-calculated values in the ranges of 1.34%-3.62% at 0.6 cm depth, 1.74%-2.96% at 0.4 cm depth, and 1.96%-4.06% at 0.2 cm depth, respectively. In the high-dose buildup region, AAA-calculated doses were lower by an average of -7.56% (SD = 4.73%), while PBC overestimated by 3.75% (SD = 5.70%), compared to TLD-measured doses at 0.2 cm depth. However, at 0.4 and 0.6 cm depth, PBC overestimated TLD-measured doses by 5.84% (SD = 4.38%) and 2.40% (SD = 4.63%), respectively, while AAA underestimated the TLD-measured doses by -0.82% (SD = 4.24%) and -1.10% (SD = 4.14%) at the same respective depths. In the low-dose buildup region, both AAA and PBC overestimated the TLD-measured doses at all depths, except for -2.05% (SD = 10.21%) by AAA at 0.2 cm depth. The differences between AAA and PBC at all depths were statistically significant (p < 0.05) in the high-dose buildup region, whereas they were not statistically significant in the low-dose buildup region. In conclusion, AAA calculated the dose more accurately than PBC in the clinically important high-dose buildup region at 0.4 cm and 0.6 cm depths. The use of an Orfit cast increases the dose buildup

  1. Algorithm-enabled low-dose micro-CT imaging.

    PubMed

    Han, Xiao; Bian, Junguo; Eaker, Diane R; Kline, Timothy L; Sidky, Emil Y; Ritman, Erik L; Pan, Xiaochuan

    2011-03-01

    Micro-computed tomography (micro-CT) is an important tool in biomedical research and preclinical applications that can provide visual inspection of, and quantitative information about, imaged small animals and biological samples such as vasculature specimens. Currently, micro-CT imaging uses projection data acquired at a large number (300-1000) of views, which can limit system throughput and potentially degrade image quality due to radiation-induced deformation or damage to the small animal or specimen. In this work, we have investigated low-dose micro-CT and its application to specimen imaging from substantially reduced projection data by using a recently developed algorithm, referred to as the adaptive-steepest-descent-projection-onto-convex-sets (ASD-POCS) algorithm, which reconstructs an image by minimizing the image total variation while enforcing data constraints. To validate and evaluate the performance of the ASD-POCS algorithm, we carried out quantitative evaluation studies on a number of tasks of practical interest in imaging specimens of real animal organs. The results show that the ASD-POCS algorithm can yield images with quality comparable to that obtained with existing algorithms, while using one-sixth to one-quarter of the 361-view data currently used in typical micro-CT specimen imaging.
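
    The core loop of an ASD-POCS-style reconstruction alternates two steps: projection toward data consistency (here, one relaxed ART sweep plus a non-negativity clip) and steepest descent on the image total variation. The sketch below runs on a toy phantom with a random system matrix; the published algorithm adds adaptive step-size balancing that is omitted here.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 16                                   # 16x16 toy image
      x_true = np.zeros((n, n))
      x_true[4:12, 5:11] = 1.0                 # piecewise-constant phantom
      A = rng.uniform(0.0, 1.0, (150, n * n))  # toy projection matrix (underdetermined)
      b = A @ x_true.ravel()

      def tv_gradient(u, eps=1e-8):
          """Gradient of the smoothed total variation sum(sqrt(dx^2 + dy^2 + eps))."""
          gx = np.zeros_like(u); gy = np.zeros_like(u)
          gx[:-1, :] = u[1:, :] - u[:-1, :]
          gy[:, :-1] = u[:, 1:] - u[:, :-1]
          mag = np.sqrt(gx**2 + gy**2 + eps)
          grad = -(gx + gy) / mag              # derivative w.r.t. the center pixel
          grad[1:, :] += gx[:-1, :] / mag[:-1, :]   # neighbors' terms
          grad[:, 1:] += gy[:, :-1] / mag[:, :-1]
          return grad

      x = np.zeros(n * n)
      for _ in range(50):
          for ai, bi in zip(A, b):             # one relaxed ART sweep (data step)
              x += 0.4 * (bi - ai @ x) / (ai @ ai) * ai
          x = np.clip(x, 0.0, None)            # non-negativity constraint
          img = x.reshape(n, n)
          for _ in range(10):                  # a few TV steepest-descent steps
              g = tv_gradient(img)
              img = img - 0.02 * g / (np.linalg.norm(g) + 1e-12)
          x = img.ravel()

      err = np.linalg.norm(x - x_true.ravel()) / np.linalg.norm(x_true)
      print(f"relative reconstruction error: {err:.3f}")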

  2. Performance of dose calculation algorithms from three generations in lung SBRT: comparison with full Monte Carlo-based dose distributions.

    PubMed

    Ojala, Jarkko J; Kapanen, Mika K; Hyödynmaa, Simo J; Wigren, Tuija K; Pitkänen, Maunu A

    2014-01-01

    The accuracy of dose calculation is a key challenge in stereotactic body radiotherapy (SBRT) of the lung. We have benchmarked three photon beam dose calculation algorithms implemented in a commercial treatment planning system (TPS), Varian Eclipse: pencil beam convolution (PBC), the anisotropic analytical algorithm (AAA), and Acuros XB (AXB). Dose distributions from full Monte Carlo (MC) simulations were regarded as the reference. In the first stage, for four patients with central lung tumors, treatment plans using the 3D conformal radiotherapy (CRT) technique applying 6 MV photon beams were made using the AXB algorithm, with planning criteria according to the Nordic SBRT study group. The plans were recalculated (with the same number of monitor units (MUs) and identical field settings) using the BEAMnrc and DOSXYZnrc MC codes. The MC-calculated dose distributions were compared to the corresponding AXB-calculated dose distributions to assess the accuracy of the AXB algorithm, to which the other TPS algorithms were then compared. In the second stage, treatment plans were made for ten patients with the 3D CRT technique using both the PBC algorithm and the AAA. The plans were recalculated (with the same number of MUs and identical field settings) with the AXB algorithm and then compared to the original plans. Throughout the study, the comparisons were made as a function of the size of the planning target volume (PTV), using various dose-volume histogram (DVH) and other parameters to quantitatively assess plan quality. In the first stage, 3D gamma analyses with threshold criteria of 3%/3 mm and 2%/2 mm were also applied. The AXB-calculated dose distributions showed a relatively high level of agreement with the full MC simulation in the light of the 3D gamma analysis and DVH comparison, especially with large PTVs, but larger discrepancies were found with smaller PTVs. Gamma agreement index (GAI) values between 95.5% and 99.6% were achieved for all plans with the threshold criteria 3%/3 mm, but 2%/2 mm
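
    The gamma analysis used for such benchmarking reduces, per evaluated point, to a search for the minimum combined dose-difference / distance-to-agreement metric; a point passes when that minimum is at most 1. Below is a brute-force 2D version with a global 3%/3 mm criterion on synthetic dose grids; clinical implementations add interpolation and much faster search.

      import numpy as np

      def gamma_pass_rate(ref, ev, spacing_mm, dd=0.03, dta_mm=3.0):
          """Fraction of points with gamma <= 1 (global dose normalization)."""
          ny, nx = ref.shape
          ys, xs = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
          dose_norm = dd * ref.max()
          passed = 0
          for i in range(ny):
              for j in range(nx):
                  dist2 = ((ys - i)**2 + (xs - j)**2) * spacing_mm**2
                  dose2 = (ref - ev[i, j])**2
                  gamma2 = dist2 / dta_mm**2 + dose2 / dose_norm**2
                  passed += gamma2.min() <= 1.0
          return passed / (ny * nx)

      rng = np.random.default_rng(3)
      profile = np.exp(-((np.arange(40) - 20)**2) / 60.0)
      ref = np.outer(profile, profile) * 200.0                # synthetic 2D dose
      ev = ref * (1 + 0.02 * rng.standard_normal(ref.shape))  # noisy "evaluation"

      print(f"gamma pass rate (3%/3mm): {gamma_pass_rate(ref, ev, spacing_mm=1.0):.1%}")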

  3. [Comparison of dose calculation algorithms in stereotactic radiation therapy in lung].

    PubMed

    Tomiyama, Yuki; Araki, Fujio; Kanetake, Nagisa; Shimohigashi, Yoshinobu; Tominaga, Hirofumi; Sakata, Jyunichi; Oono, Takeshi; Kouno, Tomohiro; Hioki, Kazunari

    2013-06-01

    Dose calculation algorithms in radiation treatment planning systems (RTPSs) play a crucial role in stereotactic body radiation therapy (SBRT) in the lung with heterogeneous media. This study investigated the performance and accuracy of dose calculation for three algorithms: analytical anisotropic algorithm (AAA), pencil beam convolution (PBC) and Acuros XB (AXB) in Eclipse (Varian Medical Systems), by comparison against the Voxel Monte Carlo algorithm (VMC) in iPlan (BrainLab). The dose calculations were performed with clinical lung treatments under identical planning conditions, and the dose distributions and the dose volume histogram (DVH) were compared among algorithms. AAA underestimated the dose in the planning target volume (PTV) compared to VMC and AXB in most clinical plans. In contrast, PBC overestimated the PTV dose. AXB tended to slightly overestimate the PTV dose compared to VMC but the discrepancy was within 3%. The discrepancy in the PTV dose between VMC and AXB appears to be due to differences in physical material assignments, material voxelization methods, and an energy cut-off for electron interactions. The dose distributions in lung treatments varied significantly according to the calculation accuracy of the algorithms. VMC and AXB are better algorithms than AAA for SBRT. PMID:23782779

  4. Pediatric chest HRCT using the iDose4 Hybrid Iterative Reconstruction Algorithm: Which iDose level to choose?

    NASA Astrophysics Data System (ADS)

    Smarda, M.; Alexopoulou, E.; Mazioti, A.; Kordolaimi, S.; Ploussi, A.; Priftis, K.; Efstathopoulos, E.

    2015-09-01

    The purpose of this study was to determine the appropriate iterative reconstruction (IR) algorithm level that combines image quality and diagnostic confidence for pediatric patients undergoing high-resolution computed tomography (HRCT). During the last 2 years, a total of 20 children up to 10 years old with a clinical presentation of chronic bronchitis underwent HRCT on our department's 64-detector-row CT scanner using the iDose IR algorithm, with similar image settings (80 kVp, 40-50 mAs). CT images were reconstructed with all iDose levels (levels 1 to 7) as well as with the filtered back projection (FBP) algorithm. Subjective image quality was evaluated by 2 experienced radiologists in terms of image noise, sharpness, contrast and diagnostic acceptability using a 5-point scale (1 = excellent image, 5 = non-acceptable image). The existence of artifacts was also noted. All mean scores from both radiologists corresponded to satisfactory image quality (score ≤3), even with the FBP algorithm. Almost excellent (score <2) overall image quality was achieved with iDose levels 5 to 7, but oversmoothing artifacts appearing with iDose levels 6 and 7 affected diagnostic confidence. In conclusion, the use of iDose level 5 enables almost excellent image quality without considerable artifacts affecting the diagnosis. Further evaluation is needed in order to draw more precise conclusions.

  5. Comparison of dose calculation algorithms for colorectal cancer brachytherapy treatment with a shielded applicator

    SciTech Connect

    Yan Xiangsheng; Poon, Emily; Reniers, Brigitte; Vuong, Te; Verhaegen, Frank

    2008-11-15

    Colorectal cancer patients are treated at our hospital with ¹⁹²Ir high dose rate (HDR) brachytherapy using an applicator that allows the introduction of a lead or tungsten shielding rod to reduce the dose to healthy tissue. The clinical dose planning calculations are, however, currently performed without taking the shielding into account. To study the dose distributions in shielded cases, three techniques were employed. The first technique was to adapt a shielding algorithm which is part of the Nucletron PLATO HDR treatment planning system. The isodose pattern exhibited unexpected features but was found to be a reasonable approximation. The second technique employed a ray tracing algorithm that assigns a constant dose ratio with/without shielding behind the shielding along a radial line originating from the source. The dose calculation results were similar to the results from the first technique but with improved accuracy. The third and most accurate technique used a dose-matrix-superposition algorithm based on Monte Carlo calculations. The results from the latter technique showed quantitatively that the dose to healthy tissue is reduced significantly in the presence of shielding. However, it was also found that the dose to the tumor may be affected by the presence of shielding; for about a quarter of the patients treated, the volume covered by the 100% isodose lines was reduced by more than 5%, leading to potential tumor cold spots. Use of any of the three shielding algorithms results in improved dose estimates for healthy tissue and the tumor.

  6. Novel lung IMRT planning algorithms with nonuniform dose delivery strategy to account for respiratory motion.

    PubMed

    Li, Xiang; Zhang, Pengpeng; Mah, Dennis; Gewanter, Richard; Kutcher, Gerald

    2006-09-01

    To effectively deliver radiation dose to lung tumors, respiratory motion has to be considered in treatment planning. In this paper we first present a new lung IMRT planning algorithm, referred to as the dose shaping (DS) method, that shapes the dose distribution according to the probability distribution of the tumor over the breathing cycle to account for respiratory motion. In IMRT planning, a dose-based convolution (CON-DOSE) method is generally adopted to compensate for random organ motion by performing 4-D dose calculations using a tumor motion probability density function. We modified the CON-DOSE method to a dose volume histogram based convolution method (CON-DVH) that allows nonuniform dose distribution to account for respiratory motion. We implemented the two new planning algorithms on an in-house IMRT planning system that uses the Eclipse (Varian, Palo Alto, CA) planning workstation as the dose calculation engine. The new algorithms were compared with (1) the conventional margin extension approach, in which the margin is generated based on the extreme positions of the tumor, (2) the dose-based convolution method, and (3) gating with 3 mm residual motion. Dose volume histograms, tumor control probability, normal tissue complication probability, and mean lung dose were calculated and used to evaluate the relative performance of these approaches at the end-exhale phase of the respiratory cycle. We recruited six patients for our treatment planning study. The study demonstrated that the two new methods could significantly reduce the ipsilateral normal lung dose and outperformed the margin extension method and the dose-based convolution method. Compared with the gated approach, which has the best performance in the low-dose region, the two methods we proposed have similar potential to escalate tumor dose, but could be more efficient because dose is delivered continuously. PMID:17022235
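
    The dose-based convolution idea referenced here amounts to blurring the planned static dose with the tumor's motion probability density function. A one-dimensional sketch with an invented, asymmetric breathing PDF:

      import numpy as np

      z = np.arange(-30, 31)                          # position along motion axis, mm
      static = np.where(np.abs(z) <= 10, 100.0, 0.0)  # idealized 100% dose plateau

      # Asymmetric breathing PDF: the tumor spends most of the cycle near exhale.
      offsets = np.arange(-8, 9)
      pdf = np.exp(-0.5 * ((offsets + 4) / 2.5)**2)
      pdf /= pdf.sum()

      blurred = np.convolve(static, pdf, mode="same")  # motion-averaged dose

      edge = np.searchsorted(z, 10)                    # field edge at z = +10 mm
      print("static dose at field edge :", static[edge])
      print("blurred dose at field edge:", round(blurred[edge], 1))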

  7. Experimental verification of the planned dose perturbation algorithm in an anthropomorphic phantom

    NASA Astrophysics Data System (ADS)

    Feygelman, V.; Opp, D.; Zhang, G.; Stevens, C.; Nelms, B.

    2013-06-01

    3DVH software is capable of generating a volumetric patient VMAT dose by applying a volumetric perturbation algorithm based on comparing a measurement-guided dose reconstruction with the TPS-calculated dose to a cylindrical phantom. The primary purpose of this paper is to validate this dose reconstruction method on an anthropomorphic heterogeneous thoracic phantom by direct comparison to independent measurements. Reconstructed "patient" doses compare well with the independent dose profile measurements in the unit-density target inside the thoracic phantom lung. The largest differences are observed in lung and are associated with a highly modulated plan with narrow (few mm) MLC openings. Such a plan is instructive as a stress test of the algorithm but is not likely to be clinically encountered in lung. This residual disagreement underscores the fact that 3DVH is not designed to correct errors related to the TPS dose calculations in low-density media.

  8. Towards Rational Dosing Algorithms for Vancomycin in Neonates and Infants Based on Population Pharmacokinetic Modeling

    PubMed Central

    Janssen, Esther J. H.; Välitalo, Pyry A. J.; Allegaert, Karel; de Cock, Roosmarijn F. W.; Simons, Sinno H. P.; Sherwin, Catherine M. T.; van den Anker, Johannes N.

    2015-01-01

    Because of the recent awareness that vancomycin doses should aim to meet a target area under the concentration-time curve (AUC) instead of trough concentrations, more aggressive dosing regimens are warranted in the pediatric population as well. In this study, both neonatal and pediatric pharmacokinetic models for vancomycin were externally evaluated and subsequently used to derive model-based dosing algorithms for neonates, infants, and children. For the external validation, predictions from previously published pharmacokinetic models were compared to new data. Simulations were performed in order to evaluate current dosing regimens and to propose a model-based dosing algorithm. The AUC/MIC over 24 h (AUC24/MIC) was evaluated for all investigated dosing schedules (target of >400), without any concentration exceeding 40 mg/liter. Both the neonatal and pediatric models of vancomycin performed well in the external data sets, resulting in concentrations that were predicted correctly and without bias. For neonates, a dosing algorithm based on body weight at birth and postnatal age is proposed, with daily doses divided over three to four doses. For infants aged <1 year, doses between 32 and 60 mg/kg/day divided over four doses are proposed, while above 1 year of age, 60 mg/kg/day seems appropriate. As the time to reach steady-state concentrations varies from 155 h in preterm infants to 36 h in children aged >1 year, an initial loading dose is proposed. Based on the externally validated neonatal and pediatric vancomycin models, novel dosing algorithms are proposed for neonates and children aged <1 year. For children aged 1 year and older, the currently advised maintenance dose of 60 mg/kg/day seems appropriate. PMID:26643337
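
    At steady state the 24-hour AUC equals the daily dose divided by clearance, so an AUC24/MIC target translates directly into a dose once clearance is predicted. The sketch below uses a generic allometric-plus-maturation clearance model with assumed parameter values; it illustrates the approach only and is neither the paper's published model nor dosing advice.

      def vancomycin_daily_dose_mg(weight_kg, pma_weeks, mic_mg_l=1.0,
                                   target_auc24_mic=400.0):
          """Daily dose (mg) aiming at a steady-state AUC24/MIC target."""
          cl_typical_adult = 3.5                     # L/h for 70 kg (assumed value)
          size = (weight_kg / 70.0) ** 0.75          # allometric weight scaling
          # Hill-type maturation of clearance with postmenstrual age (assumed).
          maturation = pma_weeks**3.4 / (pma_weeks**3.4 + 47.7**3.4)
          cl = cl_typical_adult * size * maturation  # L/h
          return target_auc24_mic * mic_mg_l * cl    # since AUC24 = daily dose / CL

      # A term neonate (3.4 kg, 40 weeks PMA) vs. a one-year-old (10 kg, ~92 weeks).
      print(round(vancomycin_daily_dose_mg(3.4, 40)), "mg/day")
      print(round(vancomycin_daily_dose_mg(10.0, 92)), "mg/day")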

  9. TVA's dose algorithm for Panasonic type 802 TLDs

    SciTech Connect

    Colvett, R.D.; Gupta, V.P.; Hudson, C.G.

    1988-10-01

    The TVA algorithm for interpreting readings from Panasonic type 802 multi-element TLDs uses a calculational method similar to that for unfolding neutron spectra. A response matrix is constructed for the four elements in the Panasonic TLD based on tests performed in a variety of single-component radiation fields. The matrix is then used to unfold the responses of the elements when the dosimeter is exposed to mixed radiation fields. In this paper the response matrix and the calculational method are described in detail, and test results are presented that verify the algorithm's effectiveness.
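
    The unfolding step described here is linear algebra: if each element's reading is a weighted sum of the dose components, the readings form R·d, and d is recovered by inverting (or least-squares solving) the response matrix. The matrix values below are invented for illustration.

      import numpy as np

      # Rows: TLD elements 1-4; columns: dose components (deep, shallow, eye).
      R = np.array([[1.00, 0.10, 0.30],
                    [0.90, 0.80, 0.70],
                    [0.20, 1.00, 0.60],
                    [0.50, 0.40, 1.00]])

      true_doses = np.array([2.0, 0.5, 1.0])   # mSv, in a mixed radiation field
      readings = R @ true_doses                # what the four elements would read

      unfolded, *_ = np.linalg.lstsq(R, readings, rcond=None)
      print("unfolded (deep, shallow, eye) doses:", unfolded.round(3))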

  10. A finite size pencil beam algorithm for IMRT dose optimization: density corrections.

    PubMed

    Jeleń, U; Alber, M

    2007-02-01

    For beamlet-based IMRT optimization, fast but less accurate dose computation algorithms are frequently used, while more accurate algorithms are needed to recompute the final dose for verification. In order to speed up the optimization process and ensure close agreement between the dose in optimization and in verification, proper consideration of dose gradients and tissue inhomogeneity effects should be ensured at every stage of the optimization. Due to their speed, pencil beam algorithms are often used for precalculation of beamlet dose distributions in IMRT treatment planning systems. However, accounting for tissue heterogeneities with these models requires the use of approximate rescaling methods. Recently, a finite size pencil beam (fsPB) algorithm, based on a simple and small set of data, was proposed that was specifically designed for the purpose of dose pre-computation in beamlet-based IMRT. The present work describes the incorporation of 3D density corrections, based on Monte Carlo simulations in heterogeneous phantoms, into this method, improving the algorithm's accuracy in inhomogeneous geometries while keeping its original speed and simplicity of commissioning. The algorithm affords the full accuracy of 3D density corrections at every stage of the optimization, hence providing the means for density-related fluence modulation such as penumbra shaping at field edges. PMID:17228109

  11. Accuracy of patient specific organ-dose estimates obtained using an automated image segmentation algorithm

    NASA Astrophysics Data System (ADS)

    Gilat-Schmidt, Taly; Wang, Adam; Coradi, Thomas; Haas, Benjamin; Star-Lack, Josh

    2016-03-01

    The overall goal of this work is to develop a rapid, accurate and fully automated software tool to estimate patient-specific organ doses from computed tomography (CT) scans using a deterministic Boltzmann Transport Equation solver and automated CT segmentation algorithms. This work quantified the accuracy of organ dose estimates obtained by an automated segmentation algorithm. The investigated algorithm uses a combination of feature-based and atlas-based methods. A multiatlas approach was also investigated. We hypothesize that the auto-segmentation algorithm is sufficiently accurate to provide organ dose estimates since random errors at the organ boundaries will average out when computing the total organ dose. To test this hypothesis, twenty head-neck CT scans were expertly segmented into nine regions. A leave-one-out validation study was performed, where every case was automatically segmented with each of the remaining cases used as the expert atlas, resulting in nineteen automated segmentations for each of the twenty datasets. The segmented regions were applied to gold-standard Monte Carlo dose maps to estimate mean and peak organ doses. The results demonstrated that the fully automated segmentation algorithm estimated the mean organ dose to within 10% of the expert segmentation for regions other than the spinal canal, with median error for each organ region below 2%. In the spinal canal region, the median error was 7% across all data sets and atlases, with a maximum error of 20%. The error in peak organ dose was below 10% for all regions, with a median error below 4% for all organ regions. The multiple-case atlas reduced the variation in the dose estimates and additional improvements may be possible with more robust multi-atlas approaches. Overall, the results support potential feasibility of an automated segmentation algorithm to provide accurate organ dose estimates.
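
    The quantity being validated here, mean and peak organ dose, is just the dose map averaged or maximized over a segmentation mask, which is why boundary errors tend to average out for the mean. A toy sketch comparing an expert mask with a randomly perturbed automated one; all data are synthetic.

      import numpy as np

      rng = np.random.default_rng(4)
      dose = rng.gamma(2.0, 5.0, size=(32, 32, 32))   # toy voxel dose map (mGy)

      expert = np.zeros(dose.shape, dtype=bool)
      expert[10:20, 8:18, 12:22] = True               # "expert" organ mask
      auto = expert.copy()
      auto ^= rng.random(dose.shape) < 0.02           # flip ~2% of voxels to mimic
                                                      # segmentation disagreement

      mean_err = abs(dose[auto].mean() - dose[expert].mean()) / dose[expert].mean()
      peak_err = abs(dose[auto].max() - dose[expert].max()) / dose[expert].max()
      print(f"mean organ dose error: {mean_err:.2%}, peak dose error: {peak_err:.2%}")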

  12. Comparison of selected dose calculation algorithms in radiotherapy treatment planning for tissues with inhomogeneities

    NASA Astrophysics Data System (ADS)

    Woon, Y. L.; Heng, S. P.; Wong, J. H. D.; Ung, N. M.

    2016-03-01

    Inhomogeneity correction is recommended for accurate dose calculation in radiotherapy treatment planning, since the human body is highly inhomogeneous due to the presence of bones and air cavities. However, each dose calculation algorithm has its own limitations. This study assesses the accuracy of five algorithms currently implemented for treatment planning: pencil beam convolution (PBC), superposition (SP), the anisotropic analytical algorithm (AAA), Monte Carlo (MC) and Acuros XB (AXB). The calculated dose was compared with the dose measured using radiochromic film (Gafchromic EBT2) in inhomogeneous phantoms. In addition, the dosimetric impact of the different algorithms on intensity modulated radiotherapy (IMRT) was studied for the head and neck region. MC had the best agreement with the measured percentage depth dose (PDD) within the inhomogeneous region, followed by AXB, AAA, SP and PBC. For IMRT planning, the MC algorithm is recommended for treatment planning in preference to PBC and SP. The MC and AXB algorithms were found to have better accuracy in terms of inhomogeneity correction and should be used for tumour volumes in the proximity of inhomogeneous structures.

  13. Dose-calculation algorithms in the context of inhomogeneity corrections for high energy photon beams

    SciTech Connect

    Papanikolaou, Niko; Stathakis, Sotirios

    2009-10-15

    Radiation therapy has witnessed a plethora of innovations and developments in the past 15 years. Since the introduction of computed tomography for treatment planning there has been a steady introduction of new methods to refine treatment delivery. Imaging continues to be an integral part of the planning, but also the delivery, of modern radiotherapy. However, all the efforts of image-guided radiotherapy, intensity-modulated planning and delivery, adaptive radiotherapy, and everything else that we pride ourselves on having in the armamentarium can fall short unless there is an accurate dose-calculation algorithm. The agreement between the calculated and delivered doses is of great significance in radiation therapy, since the accuracy of the absorbed dose as prescribed determines the clinical outcome. Dose-calculation algorithms have evolved greatly over the years in an effort to be more inclusive of the effects that govern the true radiation transport through the human body. In this Vision 20/20 paper, we look back to see how it all started and where things are now in terms of dose algorithms for photon beams and the inclusion of tissue heterogeneities. Convolution-superposition algorithms have dominated the treatment planning industry for the past few years. Monte Carlo techniques have an inherent accuracy that is superior to any other algorithm and as such will continue to be the gold standard, along with measurements, and maybe one day will be the algorithm of choice for all particle treatment planning in radiation therapy.

  14. Dose prediction accuracy of anisotropic analytical algorithm and pencil beam convolution algorithm beyond high density heterogeneity interface

    PubMed Central

    Rana, Suresh B.

    2013-01-01

    Purpose: It is well known that photon beam radiation therapy requires dose calculation algorithms. The objective of this study was to measure and assess the ability of pencil beam convolution (PBC) and the anisotropic analytical algorithm (AAA) to predict doses beyond a high-density heterogeneity. Materials and Methods: An inhomogeneous phantom of five layers was created in the Eclipse planning system (version 8.6.15). Each layer of the phantom was assigned as water (first or top), air (second), water (third), bone (fourth), and water (fifth or bottom) medium. Depth doses in water (the bottom medium) were calculated for 100 monitor units (MUs) with a 6 megavoltage (MV) photon beam for different field sizes using AAA and PBC with heterogeneity correction. Combinations of solid water, polyvinyl chloride (PVC), and Styrofoam were then assembled to mimic the phantom, and doses for 100 MUs were acquired with a cylindrical ionization chamber at selected depths beyond the high-density heterogeneity interface. The measured and calculated depth doses were then compared. Results: AAA's values had better agreement with measurements at all measured depths. Dose overestimation by AAA (up to 5.3%) and by PBC (up to 6.7%) was higher in proximity to the high-density heterogeneity interface, and the dose discrepancies were more pronounced for larger field sizes. The errors in dose estimation by AAA and PBC may be due to improper beam modeling of primary beam attenuation or lateral scatter contributions, or a combination of both, in heterogeneous media that include low- and high-density materials. Conclusions: AAA is more accurate than PBC for dose calculations in treating deep-seated tumors beyond a high-density heterogeneity interface. PMID:24455541

  15. SU-E-T-67: Clinical Implementation and Evaluation of the Acuros Dose Calculation Algorithm

    SciTech Connect

    Yan, C; Combine, T; Dickens, K; Wynn, R; Pavord, D; Huq, M

    2014-06-01

    Purpose: The main aim of the current study is to present a detailed description of the implementation of the Acuros XB dose calculation algorithm, and subsequently to evaluate its clinical impact by comparing it with the AAA algorithm. Methods: The source models for both Acuros XB and AAA were configured by importing the same measured beam data into the Eclipse treatment planning system. Both algorithms were evaluated by comparing calculated dose with measured dose on a homogeneous water phantom for field sizes ranging from 6 cm × 6 cm to 40 cm × 40 cm. Central axis and off-axis points at different depths were chosen for the comparison. Similarly, wedge fields with wedge angles from 15 to 60 degrees were used. In addition, variable field sizes for a heterogeneous phantom were used to evaluate the Acuros algorithm. Finally, both Acuros and AAA were tested on VMAT patient plans for various sites. Dose distributions and calculation times were compared. Results: On average, computation time was reduced by at least 50% by Acuros XB compared with AAA on single fields and VMAT plans. When used for open 6 MV photon beams on the homogeneous water phantom, both Acuros XB and AAA calculated doses were within 1% of measurement. For 23 MV photon beams, the calculated doses were within 1.5% of measured doses for Acuros XB and 2% for AAA. When the heterogeneous phantom was used, Acuros XB also showed improved accuracy. Conclusion: Compared with AAA, Acuros XB can improve accuracy while significantly reducing computation time for VMAT plans.

  16. An algorithm to calculate a collapsed arc dose matrix in volumetric modulated arc therapy

    SciTech Connect

    Arumugam, Sankar; Xing, Aitang; Jameson, Michael; Holloway, Lois

    2013-07-15

    Purpose: The delivery of volumetric modulated arc therapy (VMAT) is more complex than other conformal radiotherapy techniques. In this work, the authors present the feasibility of performing routine verification of VMAT delivery using a dose matrix measured by a gantry-mounted 2D ion chamber array and the corresponding dose matrix calculated by an in-house developed algorithm. Methods: The Pinnacle v9.0 treatment planning system (TPS) was used in this study to generate VMAT plans for a 6 MV photon beam from an Elekta Synergy linear accelerator. An algorithm was developed and implemented with in-house computer code to calculate the dose matrix resulting from a VMAT arc in a plane perpendicular to the beam at isocenter. The algorithm was validated using measurement of standard patterns and clinical VMAT plans with a 2D ion chamber array. The clinical VMAT plans were also validated using ArcCHECK measurements. The measured and calculated dose matrices were compared using gamma (γ) analysis with 3%/3 mm criteria and a γ tolerance of 1. Results: The dose matrix comparison of standard patterns showed excellent agreement, with a mean γ pass rate of 97.7% (σ = 0.4%). The validation of clinical VMAT plans using the dose matrix predicted by the algorithm and the corresponding measured dose matrices also showed good agreement, with a mean γ pass rate of 97.6% (σ = 1.6%). The validation of clinical VMAT plans using ArcCHECK measurements showed a mean pass rate of 95.6% (σ = 1.8%). Conclusions: The developed algorithm was shown to accurately predict the dose matrix, in a plane perpendicular to the beam, by considering all possible leaf trajectories in a VMAT delivery. This enables the verification of VMAT delivery using a 2D array detector mounted on the treatment head.
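
    As a concrete illustration of the γ comparison used above, the following is a minimal, brute-force NumPy sketch of a 2D global gamma computation with 3%/3 mm criteria. The function name and toy matrices are illustrative; production tools additionally interpolate the evaluated distribution and restrict the search radius for speed.

        import numpy as np

        def gamma_2d(ref, ev, spacing_mm, dose_tol=0.03, dist_mm=3.0):
            """Global 2D gamma map of `ev` against `ref` (same grid)."""
            ny, nx = ref.shape
            yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
            dose_norm = dose_tol * ref.max()        # global dose criterion
            gamma = np.full(ref.shape, np.inf)
            for iy in range(ny):
                for ix in range(nx):
                    # squared spatial distances (mm^2) to every evaluated point
                    r2 = ((yy - iy) * spacing_mm) ** 2 + ((xx - ix) * spacing_mm) ** 2
                    d2 = (ev - ref[iy, ix]) ** 2    # squared dose differences
                    gamma[iy, ix] = np.sqrt((r2 / dist_mm**2 + d2 / dose_norm**2).min())
            return gamma

        # toy usage: pass rate = fraction of points with gamma <= 1
        ref = np.random.rand(32, 32)
        ev = ref + 0.01 * np.random.randn(32, 32)
        print("pass rate:", (gamma_2d(ref, ev, spacing_mm=1.0) <= 1.0).mean())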

  17. Busulfan dosing algorithm and sampling strategy in stem cell transplantation patients

    PubMed Central

    de Castro, Francine A; Piana, Chiara; Simões, Belinda P; Lanchote, Vera L; Della Pasqua, O

    2015-01-01

    Aim The aim of this investigation was to develop a model-based dosing algorithm for busulfan and to identify an optimal sampling scheme for use in routine clinical practice. Methods Clinical data from an ongoing study (n = 29) in stem cell transplantation patients were used for the purposes of our analysis. A one-compartment model was selected as the basis for sampling optimization and subsequent evaluation of a suitable dosing algorithm. Internal and external model validation procedures were performed prior to the optimization steps using ED-optimality criteria. Using systemic exposure as the parameter of interest, dosing algorithms were considered for individual patients with the aim of minimizing the deviation from the target range as determined by AUC(0,6 h). Results Busulfan exposure after oral administration was best predicted after the inclusion of adjusted ideal body weight and alanine transferase as covariates on clearance. Population parameter estimates were 3.98 h−1, 48.8 l and 12.3 l h−1 for the absorption rate constant, volume of distribution and oral clearance, respectively. Inter-occasion variability was used to describe the differences between test dose and treatment. Based on simulation scenarios, a dosing algorithm was identified which ensures that target exposure values are attained after a test dose. Moreover, our findings show that a sparse sampling scheme with five samples per patient is sufficient to characterize the pharmacokinetics of busulfan in individual patients. Conclusion The use of the proposed dosing algorithm in conjunction with a sparse sampling scheme may contribute to considerable improvement in the safety and efficacy profile of patients undergoing treatment for stem cell transplantation. PMID:25819742
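
    For orientation, here is a minimal sketch of the kind of calculation such an algorithm rests on: a one-compartment oral model built from the population estimates quoted above (ka = 3.98 h−1, V = 48.8 l, CL/F = 12.3 l h−1), with AUC(0,6 h) evaluated numerically and the linearity of exposure in dose used to scale a test dose toward a target exposure. The published algorithm's covariate adjustments are omitted, and the test dose and target AUC below are illustrative numbers, not clinical values.

        import numpy as np

        KA, V, CL = 3.98, 48.8, 12.3     # h^-1, l, l/h (population values above)
        KE = CL / V                      # elimination rate constant, h^-1

        def conc(t_h, dose_mg):
            """Plasma concentration (mg/l) after a single oral dose (F folded in)."""
            return dose_mg * KA / (V * (KA - KE)) * (np.exp(-KE * t_h) - np.exp(-KA * t_h))

        def auc_0_6(dose_mg, n=2001):
            """AUC(0,6 h) in mg*h/l by trapezoidal quadrature."""
            t = np.linspace(0.0, 6.0, n)
            c = conc(t, dose_mg)
            return float(np.sum((c[:-1] + c[1:]) / 2.0) * (t[1] - t[0]))

        # exposure is linear in dose, so a test dose calibrates the treatment dose
        test_dose, target_auc = 32.0, 4.5          # illustrative numbers
        adjusted = test_dose * target_auc / auc_0_6(test_dose)
        print(f"suggested dose: {adjusted:.1f} mg for AUC(0,6 h) = {target_auc}")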

  18. Feasibility study of dose reduction in digital breast tomosynthesis using non-local denoising algorithms

    NASA Astrophysics Data System (ADS)

    Vieira, Marcelo A. C.; de Oliveira, Helder C. R.; Nunes, Polyana F.; Borges, Lucas R.; Bakic, Predrag R.; Barufaldi, Bruno; Acciavatti, Raymond J.; Maidment, Andrew D. A.

    2015-03-01

    The main purpose of this work is to study the ability of denoising algorithms to reduce the radiation dose in Digital Breast Tomosynthesis (DBT) examinations. Clinical use of DBT is normally performed in "combo-mode", in which, in addition to DBT projections, a 2D mammogram is taken with the standard radiation dose. As a result, patients have been exposed to radiation doses higher than those used in digital mammography. Thus, efforts to reduce the radiation dose in DBT examinations are of great interest. However, a decrease in dose leads to an increased quantum noise level and a related decrease in image quality. This work is aimed at addressing this problem by the use of denoising techniques, which could allow for dose reduction while keeping the image quality acceptable. We have studied two "state of the art" denoising techniques for filtering the quantum noise due to the reduced dose in DBT projections: Non-local Means (NLM) and Block-matching 3D (BM3D). We acquired DBT projections at different dose levels of an anthropomorphic physical breast phantom with inserted simulated microcalcifications. Then, we found the optimal filtering parameters for which the denoising algorithms are capable of recovering the quality of DBT images acquired with the standard radiation dose. Results using objective image quality assessment metrics showed that the BM3D algorithm achieved better noise adjustment (mean difference in peak signal to noise ratio < 0.1 dB) and less blurring (mean difference in image sharpness ~ 6%) than NLM for the projections acquired with lower radiation doses.
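
    The non-local means filter named above can be sketched in a few lines. The brute-force implementation below is illustrative only: real DBT pipelines use optimized NLM/BM3D codes, and the filtering parameter h must be tuned to the (dose-dependent) quantum noise level.

        import numpy as np

        def nlm(img, patch=3, search=7, h=0.1):
            """Brute-force non-local means: weighted average over similar patches."""
            pad, half = patch // 2, search // 2
            padded = np.pad(img, pad + half, mode="reflect")
            out = np.zeros_like(img)
            for i in range(img.shape[0]):
                for j in range(img.shape[1]):
                    ci, cj = i + pad + half, j + pad + half
                    p0 = padded[ci - pad:ci + pad + 1, cj - pad:cj + pad + 1]
                    w_sum = v_sum = 0.0
                    for di in range(-half, half + 1):
                        for dj in range(-half, half + 1):
                            q = padded[ci + di - pad:ci + di + pad + 1,
                                       cj + dj - pad:cj + dj + pad + 1]
                            w = np.exp(-np.mean((p0 - q) ** 2) / h**2)  # patch similarity
                            w_sum += w
                            v_sum += w * padded[ci + di, cj + dj]
                    out[i, j] = v_sum / w_sum
            return out

        denoised = nlm(np.random.rand(24, 24))
        print(denoised.shape)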

  19. Evaluation of a new commercial Monte Carlo dose calculation algorithm for electron beams

    SciTech Connect

    Vandervoort, Eric J.; Cygler, Joanna E.; Tchistiakova, Ekaterina; La Russa, Daniel J.

    2014-02-15

    Purpose: In this report the authors present the validation of a Monte Carlo dose calculation algorithm (XiO EMC from Elekta Software) for electron beams. Methods: Calculated and measured dose distributions were compared for homogeneous water phantoms and for a 3D heterogeneous phantom meant to approximate the geometry of a trachea and spine. Comparisons of measurements and calculated data were performed using 2D and 3D gamma index dose comparison metrics. Results: Measured outputs agree with calculated values within estimated uncertainties for standard and extended SSDs for open applicators, and for cutouts, with the exception of the 17 MeV electron beam at extended SSD for cutout sizes smaller than 5 × 5 cm². Good agreement was obtained between calculated and experimental depth dose curves and dose profiles (the minimum percentage of measurements passing a 2%/2 mm 2D gamma index criterion for any applicator or energy was 97%). Dose calculations in a heterogeneous phantom agree with radiochromic film measurements (>98% of pixels pass a three-dimensional 3%/2 mm γ-criterion) provided that the steep dose gradient in the depth direction is considered. Conclusions: Clinically acceptable agreement (at the 2%/2 mm level) between the measurements and calculated data for measurements in water is obtained for this dose calculation algorithm. Radiochromic film is a useful tool to evaluate the accuracy of electron MC treatment planning systems in heterogeneous media.

  20. A pharmacogenetics-based warfarin maintenance dosing algorithm from Northern Chinese patients.

    PubMed

    Chen, Jinxing; Shao, Liying; Gong, Ling; Luo, Fang; Wang, Jin'e; Shi, Yi; Tan, Yu; Chen, Qianlong; Zhang, Yu; Hui, Rutai; Wang, Yibo

    2014-01-01

    Inconsistent associations with warfarin dose have been observed for genetic variants other than the VKORC1 haplotype and CYP2C9*3 in Chinese people, and few studies on warfarin dosing algorithms have been performed in a large Chinese Han population living in Northern China. Of 787 consenting patients with heart-valve replacements who were receiving long-term warfarin maintenance therapy, 20 related single nucleotide polymorphisms were genotyped. Only VKORC1 and CYP2C9 SNPs were observed to be significantly associated with warfarin dose. In the derivation cohort (n = 551), warfarin dose variability was influenced, in decreasing order, by VKORC1 rs7294 (27.3%), CYP2C9*3 (7.0%), body surface area (4.2%), age (2.7%), target INR (1.4%), CYP4F2 rs2108622 (0.7%), amiodarone use (0.6%), diabetes mellitus (0.6%), and digoxin use (0.5%), which together account for 45.1% of the warfarin dose variability. In the validation cohort (n = 236), the actual maintenance dose was significantly correlated with the predicted dose (r = 0.609, P<0.001). Our algorithm could improve the personalized management of warfarin use in Northern Chinese patients. PMID:25126975
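
    A sketch of how an algorithm of this kind is typically derived, assuming ordinary least squares of maintenance dose on genetic and clinical covariates. The covariates mirror some of those listed above, but the synthetic data and fitted coefficients are illustrative, not the published model.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 500
        X = np.column_stack([
            rng.integers(0, 3, n),       # VKORC1 rs7294 variant alleles (0/1/2)
            rng.integers(0, 2, n),       # CYP2C9*3 carrier (0/1)
            rng.normal(1.7, 0.2, n),     # body surface area, m^2
            rng.normal(55.0, 12.0, n),   # age, years
        ])
        dose = X @ np.array([0.9, -0.8, 1.5, -0.02]) + 1.0 + rng.normal(0, 0.4, n)

        A = np.column_stack([np.ones(n), X])          # design matrix with intercept
        coef, *_ = np.linalg.lstsq(A, dose, rcond=None)

        def predict_dose(vkorc1, cyp2c9_3, bsa, age):
            """Predicted maintenance dose (mg/day) for one patient."""
            return float(coef @ np.array([1.0, vkorc1, cyp2c9_3, bsa, age]))

        print(f"predicted dose: {predict_dose(1, 0, 1.8, 60):.2f} mg/day")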

  1. SU-E-T-313: The Accuracy of the Acuros XB Advanced Dose Calculation Algorithm for IMRT Dose Distributions in Head and Neck

    SciTech Connect

    Araki, F; Onizuka, R; Ohno, T; Tomiyama, Y; Hioki, K

    2014-06-01

    Purpose: To investigate the accuracy of the Acuros XB version 11 (AXB11) advanced dose calculation algorithm by comparing it with Monte Carlo (MC) calculations. The comparisons were performed with dose distributions for a virtual inhomogeneity phantom and intensity-modulated radiotherapy (IMRT) in head and neck. Methods: Recently, AXB, based on the linear Boltzmann transport equation, has been installed in the Eclipse treatment planning system (Varian Medical Oncology System, USA). The dose calculation accuracy of AXB11 was tested against EGSnrc MC calculations. In addition, AXB version 10 (AXB10) and the Analytical Anisotropic Algorithm (AAA) were also used. First, the accuracy of the inhomogeneity correction of the AXB and AAA algorithms was evaluated by comparison with MC-calculated dose distributions for a virtual inhomogeneity phantom that includes water, bone, air, adipose, muscle, and aluminum. Next, the IMRT dose distributions for head and neck calculated with the AXB and AAA algorithms were compared with MC by means of dose volume histograms and three-dimensional gamma analysis for each structure (CTV, OAR, etc.). Results: For dose distributions with the virtual inhomogeneity phantom, AXB was in good agreement with MC, except for the dose in the air region. The dose in the air region increased in the order MC, AXB11, AXB10, i.e., 0.700 MeV for MC, 0.711 MeV for AXB11, and 1.011 MeV for AXB10. Since the AAA algorithm is based on a dose kernel of water, the doses in the air, bone, and aluminum regions were considerably higher than those of AXB and MC. The pass rates of the gamma analysis for IMRT dose distributions in head and neck were similar to those of MC in the order AXB11, AXB10, AAA. Conclusion: The dose calculation accuracy of AXB11 was almost equivalent to that of the MC dose calculation.

  2. [Pharmacogenetic algorithms for predicting the appropriate dose of vitamin K antagonists: are they still useful?].

    PubMed

    Mundi, Santa; Distante, Alessandro; De Caterina, Raffaele

    2014-10-01

    The severity of the side effects that may occur with vitamin K antagonists, due to their narrow therapeutic window, requires great care in identifying the most appropriate dose of these drugs. Pharmacogenetic research has now considerably helped to clarify the relationships between genetic variants and sensitivity to such therapy, paving the way for predictive algorithms that include clinical and genetic variables to establish the best doses to start and maintain adequate anticoagulation. Pharmacogenetic algorithms indeed aim at identifying tailored regimens, reducing adverse drug reactions and subsequent hospitalizations, optimizing therapeutic efficacy and containing costs. Here we describe the results so far achieved in pharmacogenetic research with vitamin K antagonists, analyzing studies that have assessed the usefulness of such algorithms. PMID:25424019

  3. SU-E-T-202: Impact of Monte Carlo Dose Calculation Algorithm On Prostate SBRT Treatments

    SciTech Connect

    Venencia, C; Garrigo, E; Cardenas, J; Castro Pena, P

    2014-06-01

    Purpose: The purpose of this work was to quantify the dosimetric impact of using a Monte Carlo algorithm on SBRT prostate treatments previously calculated with a pencil beam dose calculation algorithm. Methods: A 6 MV photon beam produced by a Novalis TX (BrainLAB-Varian) linear accelerator equipped with HDMLC was used. Treatment plans were created using 9 fields with iPlan v4.5 (BrainLAB) and the dynamic IMRT modality. The institutional SBRT protocol uses a total dose to the prostate of 40 Gy in 5 fractions, every other day. Dose calculation was done by pencil beam (2 mm dose resolution) with heterogeneity correction and the following dose volume constraints (UCLA): PTV D95% = 40 Gy and D98% > 39.2 Gy; rectum V20Gy < 50%, V32Gy < 20%, V36Gy < 10% and V40Gy < 5%; bladder V20Gy < 40% and V40Gy < 10%; femoral heads V16Gy < 5%; penile bulb V25Gy < 3 cc; urethra and the overlap region between PTV and PRV rectum Dmax < 42 Gy. Ten SBRT treatment plans were selected and recalculated using Monte Carlo with 2 mm spatial resolution and a mean variance of 2%. DVH comparisons between plans were made. Results: The average differences between PTV dose constraints were within 2%. However, 3 plans had differences higher than 3%, which did not meet the D98% criterion (>39.2 Gy) and should have been renormalized. Dose volume constraint differences for rectum, bladder, femoral heads and penile bulb were less than 2% and within tolerances. The urethra region and the overlap between PTV and PRV rectum showed a dose increase in all plans. The average difference for the urethra region was 2.1% with a maximum of 7.8%, and for the overlap region 2.5% with a maximum of 8.7%. Conclusion: Monte Carlo dose calculation of dynamic IMRT treatments can affect plan normalization. The dose increase in the critical urethra region and in the region where the PRV overlaps the PTV could have clinical consequences, which need to be studied. The use of a Monte Carlo dose calculation algorithm is limited because the inverse planning dose optimization uses only the pencil beam algorithm.
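
    For reference, the dose-volume metrics quoted in the constraint list (VxGy, Dx%) reduce to simple operations on the voxel doses inside a structure; a minimal sketch with synthetic dose values follows.

        import numpy as np

        def v_gy(dose_gy, threshold_gy):
            """VxGy: percentage of the structure receiving >= threshold_gy."""
            return 100.0 * np.mean(np.asarray(dose_gy) >= threshold_gy)

        def d_pct(dose_gy, pct):
            """Dx%: dose covering pct% of the structure (e.g. D95)."""
            return np.percentile(dose_gy, 100.0 - pct)

        ptv = np.random.normal(40.0, 0.8, 10000)      # illustrative PTV voxel doses
        rectum = np.random.gamma(4.0, 4.0, 10000)     # illustrative rectum voxel doses
        print(f"PTV D95 = {d_pct(ptv, 95):.1f} Gy, rectum V20Gy = {v_gy(rectum, 20.0):.0f}%")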

  4. Comparison of dose calculation algorithms in slab phantoms with cortical bone equivalent heterogeneities

    SciTech Connect

    Carrasco, P.; Jornet, N.; Duch, M. A.; Panettieri, V.; Weber, L.; Eudaldo, T.; Ginjaume, M.; Ribas, M.

    2007-08-15

    To evaluate the dose values predicted by several calculation algorithms in two treatment planning systems, Monte Carlo (MC) simulations and measurements by means of various detectors were performed in heterogeneous layer phantoms with water- and bone-equivalent materials. Percentage depth doses (PDDs) were measured with thermoluminescent dosimeters (TLDs), metal-oxide semiconductor field-effect transistors (MOSFETs), plane parallel and cylindrical ionization chambers, and beam profiles with films. The MC code used for the simulations was the PENELOPE code. Three different field sizes (10×10, 5×5, and 2×2 cm²) were studied in two phantom configurations with a bone-equivalent material. These two phantom configurations contained heterogeneities of 5 and 2 cm of bone, respectively. We analyzed the performance of four correction-based algorithms and one based on convolution superposition. The correction-based algorithms were the Batho, the Modified Batho, and the Equivalent TAR implemented in the Cadplan (Varian) treatment planning system (TPS), and the Helax-TMS Pencil Beam from the Helax-TMS (Nucletron) TPS. The convolution-superposition algorithm was the Collapsed Cone implemented in the Helax-TMS. All the correction-based calculation algorithms underestimated the dose inside the bone-equivalent material for 18 MV compared to MC simulations. The maximum underestimation, in terms of root-mean-square (RMS) error, was about 15% for the Helax-TMS Pencil Beam (Helax-TMS PB) for a 2×2 cm² field inside the bone-equivalent material. In contrast, the Collapsed Cone algorithm yielded values around 3%. A more complex behavior was found for 6 MV, where the Collapsed Cone performed less well, overestimating the dose inside the heterogeneity by 3%-5%. The rebuild-up at the bone-water interface and the penumbra shrinking in high-density media were not predicted by any of the calculation algorithms except the Collapsed Cone, and only the MC simulations matched the experimental values.

  5. An adaptive algorithm for the detection of microcalcifications in simulated low-dose mammography

    NASA Astrophysics Data System (ADS)

    Treiber, O.; Wanninger, F.; Führ, H.; Panzer, W.; Regulla, D.; Winkler, G.

    2003-02-01

    This paper uses the task of microcalcification detection as a benchmark problem to assess the potential for dose reduction in x-ray mammography. We present the results of a newly developed algorithm for detection of microcalcifications as a case study for a typical commercial film-screen system (Kodak Min-R 2000/2190). The first part of the paper deals with the simulation of dose reduction for film-screen mammography based on a physical model of the imaging process. Use of a more sensitive film-screen system is expected to result in additional smoothing of the image. We introduce two different models of that behaviour, called moderate and strong smoothing. We then present an adaptive, model-based microcalcification detection algorithm. Comparing detection results with ground-truth images obtained under the supervision of an expert radiologist allows us to establish the soundness of the detection algorithm. We measure the performance on the dose-reduced images in order to assess the loss of information due to dose reduction. It turns out that the smoothing behaviour has a strong influence on detection rates. For moderate smoothing, a dose reduction by 25% has no serious influence on the detection results, whereas a dose reduction by 50% already entails a marked deterioration of the performance. Strong smoothing generally leads to an unacceptable loss of image quality. The test results emphasize the impact of the more sensitive film-screen system and its characteristics on the problem of assessing the potential for dose reduction in film-screen mammography. The general approach presented in the paper can be adapted to fully digital mammography.
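
    The dose-reduction simulation step can be illustrated under a pure quantum-noise assumption: at a fraction f of the original exposure the variance grows by 1/f, so zero-mean noise carrying the missing variance is injected into the full-dose image. This toy sketch omits the film-screen characteristic curve and the smoothing models that the paper layers on top.

        import numpy as np

        def simulate_reduced_dose(counts, f, rng=None):
            """counts: full-dose quanta per pixel; f: dose fraction (0 < f <= 1)."""
            rng = rng or np.random.default_rng()
            extra_var = counts * (1.0 / f - 1.0)      # variance missing at dose f
            return counts + rng.normal(0.0, np.sqrt(extra_var))

        full = np.full((64, 64), 1000.0)              # flat field, 1000 quanta/pixel
        half = simulate_reduced_dose(full, 0.5)       # simulated 50% dose image
        print("relative noise at half dose:", half.std() / half.mean())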

  6. Dosimetric impact of Acuros XB deterministic radiation transport algorithm for heterogeneous dose calculation in lung cancer

    SciTech Connect

    Han, Tao; Followill, David; Repchak, Roman; Molineu, Andrea; Howell, Rebecca; Salehpour, Mohammad; Mikell, Justin; Mourtada, Firas

    2013-05-15

    Purpose: The novel deterministic radiation transport algorithm, Acuros XB (AXB), has shown great potential for accurate heterogeneous dose calculation. However, the clinical impact between AXB and other currently used algorithms still needs to be elucidated for translation between these algorithms. The purpose of this study was to investigate the impact of AXB for heterogeneous dose calculation in lung cancer for intensity-modulated radiation therapy (IMRT) and volumetric-modulated arc therapy (VMAT). Methods: The thorax phantom from the Radiological Physics Center (RPC) was used for this study. IMRT and VMAT plans were created for the phantom in the Eclipse 11.0 treatment planning system. Each plan was delivered to the phantom three times using a Varian Clinac iX linear accelerator to ensure reproducibility. Thermoluminescent dosimeters (TLDs) and Gafchromic EBT2 film were placed inside the phantom to measure delivered doses. The measurements were compared with dose calculations from AXB 11.0.21 and the anisotropic analytical algorithm (AAA) 11.0.21. Two dose reporting modes of AXB, dose-to-medium in medium (Dm,m) and dose-to-water in medium (Dw,m), were studied. Point doses, dose profiles, and gamma analysis were used to quantify the agreement between measurements and calculations from both AXB and AAA. The computation times for AAA and AXB were also evaluated. Results: For the RPC lung phantom, AAA and AXB dose predictions were found to be in good agreement with TLD and film measurements for both IMRT and VMAT plans. TLD dose predictions were within 0.4%-4.4% of AXB doses (both Dm,m and Dw,m) and within 2.5%-6.4% of AAA doses, respectively. For the film comparisons, the gamma indexes (±3%/3 mm criteria) were 94%, 97%, and 98% for AAA, AXB Dm,m, and AXB Dw,m, respectively. The differences between AXB and AAA in dose-volume histogram mean doses were within 2% in the planning target volume, lung, and heart, and within 5% in the spinal cord

  7. A comparison between anisotropic analytical and multigrid superposition dose calculation algorithms in radiotherapy treatment planning

    SciTech Connect

    Wu, Vincent W.C.; Tse, Teddy K.H.; Ho, Cola L.M.; Yeung, Eric C.Y.

    2013-07-01

    Monte Carlo (MC) simulation is currently the most accurate dose calculation algorithm in radiotherapy planning but requires relatively long processing time. Faster model-based algorithms such as the anisotropic analytical algorithm (AAA) in the Eclipse treatment planning system and multigrid superposition (MGS) in the XiO treatment planning system are 2 commonly used algorithms. This study compared AAA and MGS against MC, as the gold standard, on brain, nasopharynx, lung, and prostate cancer patients. Computed tomography of 6 patients of each cancer type was used. The same hypothetical treatment plan using the same machine and treatment prescription was computed for each case by each planning system using its respective dose calculation algorithm. The doses at reference points including (1) soft tissues only, (2) bones only, (3) air cavities only, (4) soft tissue-bone boundary (Soft/Bone), (5) soft tissue-air boundary (Soft/Air), and (6) bone-air boundary (Bone/Air) were measured and compared using the mean absolute percentage error (MAPE), a function of the percentage dose deviations from MC. In addition, the computation time of each treatment plan was recorded and compared. The MAPEs of MGS were significantly lower than those of AAA in all types of cancers (p<0.001). With regard to body density combinations, the MAPE of AAA ranged from 1.8% (soft tissue) to 4.9% (Bone/Air), whereas that of MGS ranged from 1.6% (air cavities) to 2.9% (Soft/Bone). The MAPEs of MGS (2.6% ± 2.1%) were significantly lower than those of AAA (3.7% ± 2.5%) in all tissue density combinations (p<0.001). The mean computation time of AAA for all treatment plans was significantly lower than that of MGS (p<0.001). Both the AAA and MGS algorithms demonstrated dose deviations of less than 4.0% in most clinical cases, and their performance was better in homogeneous tissues than at tissue boundaries. In general, MGS demonstrated relatively smaller dose deviations than AAA but required longer computation time.
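
    The MAPE figure of merit used above is straightforward to reproduce; a minimal sketch with illustrative reference-point doses:

        import numpy as np

        def mape(calc, mc):
            """Mean absolute percentage deviation from the MC reference doses."""
            calc, mc = np.asarray(calc, float), np.asarray(mc, float)
            return 100.0 * np.mean(np.abs(calc - mc) / mc)

        mc  = [2.00, 1.95, 1.10, 1.80, 1.60, 1.40]    # reference-point doses, Gy
        aaa = [2.04, 1.99, 1.15, 1.88, 1.67, 1.47]
        print(f"AAA vs MC: MAPE = {mape(aaa, mc):.1f}%")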

  8. Site-specific range uncertainties caused by dose calculation algorithms for proton therapy

    NASA Astrophysics Data System (ADS)

    Schuemann, J.; Dowdell, S.; Grassberger, C.; Min, C. H.; Paganetti, H.

    2014-08-01

    The purpose of this study was to assess the possibility of introducing site-specific range margins to replace current generic margins in proton therapy. Further, the goal was to study the potential of reducing margins with current analytical dose calculation methods. For this purpose we investigate the impact of complex patient geometries on the capability of analytical dose calculation algorithms to accurately predict the range of proton fields. Dose distributions predicted by an analytical pencil-beam algorithm were compared with those obtained using Monte Carlo (MC) simulations (TOPAS). A total of 508 passively scattered treatment fields were analyzed for seven disease sites (liver, prostate, breast, medulloblastoma-spine, medulloblastoma-whole brain, lung and head and neck). Voxel-by-voxel comparisons were performed on two-dimensional distal dose surfaces calculated by pencil-beam and MC algorithms to obtain the average range differences and root mean square deviation for each field for the distal position of the 90% dose level (R90) and the 50% dose level (R50). The average dose degradation of the distal falloff region, defined as the distance between the distal positions of the 80% and 20% dose levels (R80-R20), was also analyzed. All ranges were calculated in water-equivalent distances. Considering total range uncertainties and uncertainties from dose calculation alone, we were able to deduce site-specific estimations. For liver, prostate and whole brain fields our results demonstrate that a reduction of currently used uncertainty margins is feasible even without introducing MC dose calculations. We recommend range margins of 2.8% + 1.2 mm for liver and prostate treatments and 3.1% + 1.2 mm for whole brain treatments, respectively. On the other hand, current margins seem to be insufficient for some breast, lung and head and neck patients, at least if used generically. If no case specific adjustments are applied, a generic margin of 6.3% + 1.2 mm would be needed for breast, lung and head and neck treatments.
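
    A sketch of the underlying range bookkeeping, assuming a synthetic depth-dose curve: R90/R80/R50/R20 are found by linear interpolation on the distal falloff, and the quoted site-specific recipe (e.g. 2.8% + 1.2 mm for liver and prostate) is applied to the nominal range. The falloff shape below is a toy stand-in for a real distal dose surface.

        import numpy as np

        def distal_depth(depth_mm, dose, level):
            """Depth (mm) where the distal falloff crosses level*max (interpolated)."""
            d = dose / dose.max()
            i = int(np.argmax(d))                     # index of the dose maximum
            j = i + int(np.argmax(d[i:] < level))     # first distal point below level
            x0, x1, y0, y1 = depth_mm[j - 1], depth_mm[j], d[j - 1], d[j]
            return x0 + (level - y0) * (x1 - x0) / (y1 - y0)

        z = np.linspace(0.0, 200.0, 2001)                # depth, mm
        dose = 1.0 / (1.0 + np.exp((z - 150.0) / 2.0))   # synthetic distal falloff
        r90 = distal_depth(z, dose, 0.90)
        r80, r20 = distal_depth(z, dose, 0.80), distal_depth(z, dose, 0.20)
        margin = 0.028 * r90 + 1.2                       # liver/prostate recipe, mm
        print(f"R90 = {r90:.1f} mm, R80-R20 = {r20 - r80:.1f} mm, margin = {margin:.1f} mm")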

  9. Dosimetric impact of Acuros XB deterministic radiation transport algorithm for heterogeneous dose calculation in lung cancer

    PubMed Central

    Han, Tao; Followill, David; Mikell, Justin; Repchak, Roman; Molineu, Andrea; Howell, Rebecca; Salehpour, Mohammad; Mourtada, Firas

    2013-01-01

    Purpose: The novel deterministic radiation transport algorithm, Acuros XB (AXB), has shown great potential for accurate heterogeneous dose calculation. However, the clinical impact between AXB and other currently used algorithms still needs to be elucidated for translation between these algorithms. The purpose of this study was to investigate the impact of AXB for heterogeneous dose calculation in lung cancer for intensity-modulated radiation therapy (IMRT) and volumetric-modulated arc therapy (VMAT). Methods: The thorax phantom from the Radiological Physics Center (RPC) was used for this study. IMRT and VMAT plans were created for the phantom in the Eclipse 11.0 treatment planning system. Each plan was delivered to the phantom three times using a Varian Clinac iX linear accelerator to ensure reproducibility. Thermoluminescent dosimeters (TLDs) and Gafchromic EBT2 film were placed inside the phantom to measure delivered doses. The measurements were compared with dose calculations from AXB 11.0.21 and the anisotropic analytical algorithm (AAA) 11.0.21. Two dose reporting modes of AXB, dose-to-medium in medium (Dm,m) and dose-to-water in medium (Dw,m), were studied. Point doses, dose profiles, and gamma analysis were used to quantify the agreement between measurements and calculations from both AXB and AAA. The computation times for AAA and AXB were also evaluated. Results: For the RPC lung phantom, AAA and AXB dose predictions were found to be in good agreement with TLD and film measurements for both IMRT and VMAT plans. TLD dose predictions were within 0.4%–4.4% of AXB doses (both Dm,m and Dw,m) and within 2.5%–6.4% of AAA doses, respectively. For the film comparisons, the gamma indexes (±3%/3 mm criteria) were 94%, 97%, and 98% for AAA, AXB_Dm,m, and AXB_Dw,m, respectively. The differences between AXB and AAA in dose–volume histogram mean doses were within 2% in the planning target volume, lung, and heart, and within 5% in the spinal cord. However, differences up to 8

  10. Development of a deformable dosimetric phantom to verify dose accumulation algorithms for adaptive radiotherapy

    PubMed Central

    Zhong, Hualiang; Adams, Jeffrey; Glide-Hurst, Carri; Zhang, Hualin; Li, Haisen; Chetty, Indrin J.

    2016-01-01

    Adaptive radiotherapy may improve treatment outcomes for lung cancer patients. Because of the lack of an effective tool for quality assurance, this therapeutic modality is not yet accepted in the clinic. The purpose of this study is to develop a deformable physical phantom for validation of dose accumulation algorithms in regions with heterogeneous mass. A three-dimensional (3D) deformable phantom was developed containing a tissue-equivalent tumor and heterogeneous sponge inserts. Thermoluminescent dosimeters (TLDs) were placed at multiple locations in the phantom each time before dose measurement. Doses were measured with the phantom in both the static and deformed cases. The deformation of the phantom was actuated by a motor-driven piston. 4D computed tomography images were acquired to calculate 3D doses at each phase using Pinnacle and EGSnrc/DOSXYZnrc. These images were registered using two registration software packages: VelocityAI and Elastix. With the resultant displacement vector fields (DVFs), the calculated 3D doses were accumulated using a mass- and energy-congruent mapping method and compared to those measured by the TLDs at four typical locations. In the static case, TLD measurements agreed with all the algorithms within 1.8% at the center of the tumor volume and within 4.0% in the penumbra. In the deformable case, the phantom's deformation was reproduced within 1.1 mm. For the 3D dose calculated by Pinnacle, the total dose accumulated with the Elastix DVF agreed well with the TLD measurements, with differences <2.5% at the four measured locations. When the VelocityAI DVF was used, the difference increased up to 11.8%. For the 3D dose calculated by EGSnrc/DOSXYZnrc, the total doses accumulated with the two DVFs were within 5.7% of the TLD measurements, which is slightly over the 5% level for clinical acceptance. The detector-embedded deformable phantom allows radiation dose to be measured in a dynamic environment, similar to deforming lung tissues, supporting

  11. Development of a deformable dosimetric phantom to verify dose accumulation algorithms for adaptive radiotherapy.

    PubMed

    Zhong, Hualiang; Adams, Jeffrey; Glide-Hurst, Carri; Zhang, Hualin; Li, Haisen; Chetty, Indrin J

    2016-01-01

    Adaptive radiotherapy may improve treatment outcomes for lung cancer patients. Because of the lack of an effective tool for quality assurance, this therapeutic modality is not yet accepted in the clinic. The purpose of this study is to develop a deformable physical phantom for validation of dose accumulation algorithms in regions with heterogeneous mass. A three-dimensional (3D) deformable phantom was developed containing a tissue-equivalent tumor and heterogeneous sponge inserts. Thermoluminescent dosimeters (TLDs) were placed at multiple locations in the phantom each time before dose measurement. Doses were measured with the phantom in both the static and deformed cases. The deformation of the phantom was actuated by a motor-driven piston. 4D computed tomography images were acquired to calculate 3D doses at each phase using Pinnacle and EGSnrc/DOSXYZnrc. These images were registered using two registration software packages: VelocityAI and Elastix. With the resultant displacement vector fields (DVFs), the calculated 3D doses were accumulated using a mass- and energy-congruent mapping method and compared to those measured by the TLDs at four typical locations. In the static case, TLD measurements agreed with all the algorithms within 1.8% at the center of the tumor volume and within 4.0% in the penumbra. In the deformable case, the phantom's deformation was reproduced within 1.1 mm. For the 3D dose calculated by Pinnacle, the total dose accumulated with the Elastix DVF agreed well with the TLD measurements, with differences <2.5% at the four measured locations. When the VelocityAI DVF was used, the difference increased up to 11.8%. For the 3D dose calculated by EGSnrc/DOSXYZnrc, the total doses accumulated with the two DVFs were within 5.7% of the TLD measurements, which is slightly over the 5% level for clinical acceptance. The detector-embedded deformable phantom allows radiation dose to be measured in a dynamic environment, similar to deforming lung tissues, supporting

  12. Site-specific range uncertainties caused by dose calculation algorithms for proton therapy

    PubMed Central

    Schuemann, J.; Dowdell, S.; Grassberger, C.; Min, C. H.; Paganetti, H.

    2014-01-01

    The purpose of this study was to investigate the impact of complex patient geometries on the capability of analytical dose calculation algorithms to accurately predict the range of proton fields. Dose distributions predicted by an analytical pencil-beam algorithm were compared with those obtained using Monte Carlo simulations (TOPAS). A total of 508 passively scattered treatment fields were analyzed for 7 disease sites (liver, prostate, breast, medulloblastoma-spine, medulloblastoma-whole brain, lung and head & neck). Voxel-by-voxel comparisons were performed on two-dimensional distal dose surfaces calculated by pencil-beam and Monte Carlo algorithms to obtain the average range differences (ARD) and root mean square deviation (RMSD) for each field for the distal position of the 90% dose level (R90) and the 50% dose level (R50). The average dose degradation (ADD) of the distal falloff region, defined as the distance between the distal positions of the 80% and 20% dose levels (R80-R20), was also analyzed. All ranges were calculated in water-equivalent distances. Considering total range uncertainties and uncertainties from dose calculation alone, we were able to deduce site-specific estimations. For liver, prostate and whole brain fields our results demonstrate that a reduction of currently used uncertainty margins is feasible even without introducing Monte Carlo dose calculations. We recommend range margins of 2.8% + 1.2 mm for liver and prostate treatments and 3.1% + 1.2 mm for whole brain treatments, respectively. On the other hand, current margins seem to be insufficient for some breast, lung and head & neck patients, at least if used generically. If no case specific adjustments are applied, a generic margin of 6.3% + 1.2 mm would be needed for breast, lung and head & neck treatments. We conclude that currently used generic range uncertainty margins in proton therapy should be redefined site-specifically and that complex geometries may require a field specific

  13. A nonvoxel-based dose convolution/superposition algorithm optimized for scalable GPU architectures

    SciTech Connect

    Neylon, J.; Sheng, K.; Yu, V.; Low, D. A.; Kupelian, P.; Santhanam, A.; Chen, Q.

    2014-10-15

    Purpose: Real-time adaptive planning and treatment has been infeasible due in part to its high computational complexity. There have been many recent efforts to utilize graphics processing units (GPUs) to accelerate the computational performance and dose accuracy in radiation therapy. Data structure and memory access patterns are the key GPU factors that determine the computational performance and accuracy. In this paper, the authors present a nonvoxel-based (NVB) approach to maximize computational and memory access efficiency and throughput on the GPU. Methods: The proposed algorithm employs a ray-tracing mechanism to restructure the 3D data sets computed from the CT anatomy into a nonvoxel-based framework. In a process that takes only a few milliseconds of computing time, the algorithm restructured the data sets by ray-tracing through precalculated CT volumes to realign the coordinate system along the convolution direction, as defined by zenithal and azimuthal angles. During the ray-tracing step, the data were resampled according to radial sampling and parallel ray-spacing parameters making the algorithm independent of the original CT resolution. The nonvoxel-based algorithm presented in this paper also demonstrated a trade-off in computational performance and dose accuracy for different coordinate system configurations. In order to find the best balance between the computed speedup and the accuracy, the authors employed an exhaustive parameter search on all sampling parameters that defined the coordinate system configuration: zenithal, azimuthal, and radial sampling of the convolution algorithm, as well as the parallel ray spacing during ray tracing. The angular sampling parameters were varied between 4 and 48 discrete angles, while both radial sampling and parallel ray spacing were varied from 0.5 to 10 mm. The gamma distribution analysis method (γ) was used to compare the dose distributions using 2% and 2 mm dose difference and distance-to-agreement criteria

  14. MO-E-17A-05: Individualized Patient Dosimetry in CT Using the Patient Dose (PATDOSE) Algorithm

    SciTech Connect

    Hernandez, A; Boone, J

    2014-06-15

    Purpose: Radiation dose to the patient undergoing a CT examination has been the focus of many recent studies. While CTDIvol and SSDE-based methods are important tools for patient dose management, the CT image data provide important information with respect to CT dose and its distribution. Coupled with the known geometry and output factors (kV, mAs, pitch, etc.) of the CT scanner, the CT data set can be used directly for computing absorbed dose. Methods: The HU numbers in a patient's CT data set can be converted to linear attenuation coefficients (LACs) with some assumptions. With this (PAT-DOSE) method, which is not Monte Carlo-based, the primary and scatter doses are computed separately. The primary dose is computed directly from the geometry of the scanner, the x-ray spectrum, and the known patient LACs. Once the primary dose has been computed for all voxels in the patient, the scatter dose algorithm redistributes a fraction of the absorbed primary dose (based on the HU number of each source voxel); the redistribution accounts for tissue attenuation and absorption as well as solid-angle geometry. The scatter dose algorithm can be run N times to include Nth-scatter redistribution. PAT-DOSE was deployed using simple PMMA phantoms to validate its performance against Monte Carlo-derived dose distributions. Results: Comparison between PAT-DOSE and MCNPX primary dose distributions showed excellent agreement for several scan lengths. The 1st-scatter dose distributions showed relatively higher-amplitude, long-range scatter tails for the PAT-DOSE algorithm than for the MCNPX simulations. Conclusion: The PAT-DOSE algorithm provides a fast, deterministic assessment of the 3-D dose distribution in CT, making use of the scanner geometry and the patient image data set. The preliminary implementation of the algorithm produces accurate primary dose distributions; however, achieving scatter distribution agreement is more challenging. Addressing the polyenergetic x-ray spectrum and spatially dependent
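
    Two of the ingredients described above are easy to sketch: a crude single-energy HU-to-LAC conversion and primary fluence attenuation along one ray of the voxel grid. The calibration constant below is illustrative; the actual algorithm must handle a polyenergetic spectrum and the full scanner geometry.

        import numpy as np

        MU_WATER = 0.02                 # mm^-1, illustrative effective-energy value

        def hu_to_mu(hu):
            """Crude single-energy HU -> linear attenuation coefficient (mm^-1)."""
            return MU_WATER * (1.0 + np.asarray(hu, dtype=float) / 1000.0)

        def primary_along_ray(hu_line, step_mm):
            """Relative primary fluence at each voxel along one ray."""
            mu = hu_to_mu(hu_line)
            # radiological path accumulated *before* each voxel
            path = np.concatenate([[0.0], np.cumsum(mu[:-1]) * step_mm])
            return np.exp(-path)

        line = np.zeros(100)            # 100 mm of water-equivalent voxels (HU 0)
        line[40:60] = 800.0             # 20 mm bone-like slab (HU +800)
        print(primary_along_ray(line, 1.0)[[0, 39, 60, 99]])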

  15. Dose algorithm for EXTRAD 4100S extremity dosimeter for use at Sandia National Laboratories.

    SciTech Connect

    Potter, Charles Augustus

    2011-05-01

    An updated algorithm for the EXTRAD 4100S extremity dosimeter has been derived. This algorithm optimizes the binning of dosimeter element ratios and uses a quadratic function to determine the response factors for low response ratios. This results in lower systematic bias across all test categories and eliminates the need for the 'red strap' algorithm that was used for high-energy beta/gamma emitting radionuclides. The Radiation Protection Dosimetry Program (RPDP) at Sandia National Laboratories uses the Thermo Fisher EXTRAD 4100S extremity dosimeter, shown in Fig. 1.1, to determine shallow dose to the extremities of potentially exposed individuals. This dosimeter consists of two LiF TLD elements or 'chipstrates', one of TLD-700 (⁷Li) and one of TLD-100 (natural Li), separated by a tin filter. Following readout and background subtraction, the ratio of the responses of the two elements is determined, defining the penetrability of the incident radiation. While this penetrability approximates the incident energy of the radiation, X-rays and beta particles occur in energy distributions that make the determination of dose conversion factors less straightforward.
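
    The ratio-binning logic can be sketched as follows; every bin edge and coefficient in this example is hypothetical, since the record does not publish the fitted values.

        def shallow_dose(e1, e2):
            """e1: TLD-700 element reading, e2: TLD-100 element reading (mrem)."""
            ratio = e1 / e2                     # penetrability of the incident field
            if ratio < 0.6:                     # low-energy beta region
                # quadratic response factor in the ratio (hypothetical fit)
                rf = 2.10 - 2.4 * ratio + 1.1 * ratio**2
            elif ratio < 0.9:                   # mixed / medium penetrability bin
                rf = 1.25                       # hypothetical bin factor
            else:                               # photons and high-energy beta
                rf = 1.00
            return rf * e1                      # shallow dose estimate

        print(shallow_dose(80.0, 160.0))        # ratio 0.5 -> low-energy beta branch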

  16. Dosimetric validation of the Acuros XB Advanced Dose Calculation algorithm: fundamental characterization in water

    NASA Astrophysics Data System (ADS)

    Fogliata, Antonella; Nicolini, Giorgia; Clivio, Alessandro; Vanetti, Eugenio; Mancosu, Pietro; Cozzi, Luca

    2011-05-01

    This corrigendum intends to clarify some important points that were not clearly or properly addressed in the original paper, and for which the authors apologize. The original description of the first Acuros algorithm is from the developers, published in Physics in Medicine and Biology by Vassiliev et al (2010) in the paper entitled 'Validation of a new grid-based Boltzmann equation solver for dose calculation in radiotherapy with photon beams'. The main equations describing the algorithm reported in our paper, implemented as the 'Acuros XB Advanced Dose Calculation Algorithm' in the Varian Eclipse treatment planning system, were originally described (for the original Acuros algorithm) in the above mentioned paper by Vassiliev et al. The intention of our description in our paper was to give readers an overview of the algorithm, not pretending to have authorship of the algorithm itself (used as implemented in the planning system). Unfortunately our paper was not clear, particularly in not allocating full credit to the work published by Vassiliev et al on the original Acuros algorithm. Moreover, it is important to clarify that we have not adapted any existing algorithm, but have used the Acuros XB implementation in the Eclipse planning system from Varian. In particular, the original text of our paper should have been as follows: On page 1880 the sentence 'A prototype LBTE solver, called Attila (Wareing et al 2001), was also applied to external photon beam dose calculations (Gifford et al 2006, Vassiliev et al 2008, 2010). Acuros XB builds upon many of the methods in Attila, but represents a ground-up rewrite of the solver where the methods were adapted especially for external photon beam dose calculations' should be corrected to 'A prototype LBTE solver, called Attila (Wareing et al 2001), was also applied to external photon beam dose calculations (Gifford et al 2006, Vassiliev et al 2008). A new algorithm called Acuros, developed by the Transpire Inc. group, was

  17. SU-E-T-164: Evaluation of Electron Dose Distribution Using Two Algorithms

    SciTech Connect

    Liu, D; Li, Z; Shang, K; Jing, Z; Wang, J; Miao, M; Yang, J

    2014-06-01

    Purpose: To assess the difference between electron dose distributions calculated with the Monte Carlo and Electron 3D radiotherapy algorithms in a heterogeneous phantom. Methods: A phantom consisting of two different materials (lungs mimicked by low-density cork, everything else by polystyrene) with an 11×16 cm field size (SSD = 100 cm) was utilized to estimate the two-dimensional dose distributions under 6 and 18 MeV beams. Representing two different types of tissue, the heterogeneous phantom was comprised of 3 identical slabs of 1 cm thickness and 2 slabs of 2.5 cm thickness in the longitudinal direction. The Monte Carlo/MCTP application package, consisting of five codes, was used to simulate the electron beams of a Varian Clinac 23IX. A 20×20 cm² type III (open walled) applicator was used in these simulations. It has been shown elsewhere that the agreement of the phase space data between the calculation results of the MCTP application package and the measured data was within 2% for depth-dose and transverse profiles, as well as output factor calculations. The Electron 3D algorithm of Pinnacle 8.0m and the MCTP application package were applied for the two-dimensional dose distribution calculations. The curves at 50% and 100% of the prescribed dose were observed for the 6 and 18 MeV beams, respectively. Results: The MC calculation results compared with the Electron 3D calculations in terms of two-dimensional dose distributions for 6 and 18 MeV beams showed excellent agreement except at the distal boundary, at the very junction of the high- and low-density regions. Conclusion: A case study showed that the Monte Carlo/MCTP method could be used to better reflect the dose variation caused by heterogeneous tissues.

  18. Median prior constrained TV algorithm for sparse view low-dose CT reconstruction.

    PubMed

    Liu, Yi; Shangguan, Hong; Zhang, Quan; Zhu, Hongqing; Shu, Huazhong; Gui, Zhiguo

    2015-05-01

    It is known that lowering the X-ray tube current (mAs) or tube voltage (kVp) while simultaneously reducing the total number of X-ray views (sparse view) is an effective means of achieving low-dose computed tomography (CT) scanning. However, the resulting image quality from conventional filtered back-projection (FBP) usually degrades due to excessive quantum noise. Although sparse-view CT reconstruction via total variation (TV), under scanning protocols with reduced X-ray tube current, has been demonstrated to achieve significant radiation dose reduction while maintaining image quality, noticeable patchy artifacts still exist in reconstructed images. In this study, to address the problem of patchy artifacts, we proposed a median prior constrained TV regularization that retains image quality by introducing an auxiliary vector m in register with the object. Specifically, the approximate action of m is to draw, in each iteration, an object voxel toward its own local median, aiming to improve low-dose image quality with sparse-view projection measurements. Subsequently, an alternating optimization algorithm is adopted to optimize the associated objective function. We refer to the median prior constrained TV regularization as "TV_MP" for simplicity. Experimental results on digital phantoms and a clinical phantom demonstrated that the proposed TV_MP with appropriate control parameters can not only ensure a higher signal to noise ratio (SNR) of the reconstructed image but also better preserve its resolution compared with the original TV method.
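
    A denoising-flavored sketch of the TV_MP idea, keeping only the regularizer: each iteration takes a TV gradient step and additionally pulls every voxel toward its local median (the auxiliary image m). The data-fidelity term that couples the reconstruction to the sparse-view projections is omitted, and the step sizes are illustrative.

        import numpy as np
        from scipy.ndimage import median_filter

        def tv_grad(u, eps=1e-8):
            """Gradient of smoothed isotropic total variation."""
            ux = np.diff(u, axis=1, append=u[:, -1:])
            uy = np.diff(u, axis=0, append=u[-1:, :])
            mag = np.sqrt(ux**2 + uy**2 + eps)
            px, py = ux / mag, uy / mag
            div = (np.diff(px, axis=1, prepend=px[:, :1])
                   + np.diff(py, axis=0, prepend=py[:1, :]))
            return -div                           # steepest-ascent direction of TV

        def tv_mp_denoise(noisy, n_iter=50, step=0.1, beta=0.2):
            u = noisy.copy()
            for _ in range(n_iter):
                m = median_filter(u, size=3)      # auxiliary local-median image
                u -= step * (tv_grad(u) + beta * (u - m))
            return u

        img = np.random.rand(64, 64)
        print(tv_mp_denoise(img).std() < img.std())   # output is smoother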

  19. Development of a pharmacogenetic-guided warfarin dosing algorithm for Puerto Rican patients

    PubMed Central

    Ramos, Alga S; Seip, Richard L; Rivera-Miranda, Giselle; Felici-Giovanini, Marcos E; Garcia-Berdecia, Rafael; Alejandro-Cowan, Yirelia; Kocherla, Mohan; Cruz, Iadelisse; Feliu, Juan F; Cadilla, Carmen L; Renta, Jessica Y; Gorowski, Krystyna; Vergara, Cunegundo; Ruaño, Gualberto; Duconge, Jorge

    2012-01-01

    Aim This study was aimed at developing a pharmacogenetic-driven warfarin-dosing algorithm in 163 admixed Puerto Rican patients on stable warfarin therapy. Patients & methods A multiple linear-regression analysis was performed using log-transformed effective warfarin dose as the dependent variable, and combining CYP2C9 and VKORC1 genotyping with other relevant nongenetic clinical and demographic factors as independent predictors. Results The model explained more than two-thirds of the observed variance in the warfarin dose among Puerto Ricans, and also produced significantly better ‘ideal dose’ estimates than two pharmacogenetic models and clinical algorithms published previously, with the greatest benefit seen in patients ultimately requiring <7 mg/day. We also assessed the clinical validity of the model using an independent validation cohort of 55 Puerto Rican patients from Hartford, CT, USA (R2 = 51%). Conclusion Our findings provide the basis for planning prospective pharmacogenetic studies to demonstrate the clinical utility of genotyping warfarin-treated Puerto Rican patients. PMID:23215886

  20. Adaptation of the CVT algorithm for catheter optimization in high dose rate brachytherapy

    SciTech Connect

    Poulin, Eric; Fekete, Charles-Antoine Collins; Beaulieu, Luc; Létourneau, Mélanie; Fenster, Aaron; Pouliot, Jean

    2013-11-15

    Purpose: An innovative, simple, and fast method to optimize the number and position of catheters is presented for prostate and breast high dose rate (HDR) brachytherapy, both for arbitrary templates and for template-free implants (such as robotic templates). Methods: Eight clinical cases, previously treated in our clinic, were chosen randomly from a bank of patients to test our method. The 2D Centroidal Voronoi Tessellations (CVT) algorithm was adapted to distribute catheters uniformly in space, within the maximum external contour of the planning target volume. The catheter optimization procedure includes the inverse planning simulated annealing algorithm (IPSA). Complete treatment plans can then be generated by the algorithm for different numbers of catheters. The best plan is chosen from different dosimetry criteria and automatically provides the number of catheters and their positions. After the CVT algorithm parameters were optimized for speed and dosimetric results, the method was validated against prostate clinical cases using clinically relevant dose parameters. The robustness to implantation error was also evaluated. Finally, the efficiency of the method was tested on breast interstitial HDR brachytherapy cases. Results: The effect of the number and locations of the catheters on prostate cancer patients was studied. Treatment plans with better or equivalent dose distributions could be obtained with fewer catheters. A better or equal prostate V100 was obtained down to 12 catheters. Plans with nine or fewer catheters would not be clinically acceptable in terms of prostate V100 and D90. Implantation errors up to 3 mm were acceptable since no statistical difference was found when compared to 0 mm error (p > 0.05). No significant difference in dosimetric indices was observed for the different combinations of parameters within the CVT algorithm. A linear relation was found between the number of random points and the optimization time of the CVT algorithm. Because the
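
    The CVT step itself is compact enough to sketch with Lloyd's algorithm, here on a unit disk standing in for the target's maximum external contour; the IPSA dose optimization that follows in the paper is not reproduced.

        import numpy as np

        rng = np.random.default_rng(1)

        def sample_region(n):
            """Uniform samples in the unit disk (stand-in for the PTV contour)."""
            pts = rng.uniform(-1.0, 1.0, size=(4 * n, 2))
            return pts[(pts**2).sum(axis=1) <= 1.0][:n]

        def cvt(n_catheters, n_pts=20000, n_iter=25):
            pts = sample_region(n_pts)
            seeds = sample_region(n_catheters)     # initial catheter positions
            for _ in range(n_iter):
                # assign every sample point to its nearest catheter (Voronoi cells)
                d2 = ((pts[:, None, :] - seeds[None, :, :]) ** 2).sum(axis=2)
                label = d2.argmin(axis=1)
                # move each catheter to the centroid of its cell (Lloyd update)
                for k in range(n_catheters):
                    cell = pts[label == k]
                    if len(cell):
                        seeds[k] = cell.mean(axis=0)
            return seeds

        print(cvt(12).round(2))                    # 12 uniformly spread positions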

  1. Verification of Pharmacogenetics-Based Warfarin Dosing Algorithms in Han-Chinese Patients Undertaking Mechanic Heart Valve Replacement

    PubMed Central

    Zhao, Li; Chen, Chunxia; Li, Bei; Dong, Li; Guo, Yingqiang; Xiao, Xijun; Zhang, Eryong; Qin, Li

    2014-01-01

    Objective To study the performance of pharmacogenetics-based warfarin dosing algorithms in the initial and the stable warfarin treatment phases in a cohort of Han-Chinese patients undergoing mechanical heart valve replacement. Methods We searched the PubMed, Chinese National Knowledge Infrastructure and Wanfang databases to select pharmacogenetics-based warfarin dosing models. Patients with mechanical heart valve replacement were consecutively recruited between March 2012 and July 2012. The predicted warfarin dose of each patient was calculated and compared with the observed initial and stable warfarin doses. The percentage of patients whose predicted dose fell within 20% of their actual therapeutic dose (percentage within 20%) and the mean absolute error (MAE) were utilized to evaluate the predictive accuracy of all the selected algorithms. Results A total of 8 algorithms, including the Du, Huang, Miao, Wei, Zhang, Lou, Gage, and International Warfarin Pharmacogenetics Consortium (IWPC) models, were tested in 181 patients. The MAE of the Gage, IWPC and 6 Han-Chinese pharmacogenetics-based warfarin dosing algorithms was less than 0.6 mg/day, and the percentage within 20% exceeded 45% for all of the selected models in both the initial and the stable treatment stages. When patients were stratified according to the warfarin dose range, all of the equations demonstrated better performance in the ideal-dose range (1.88–4.38 mg/day) than the low-dose range (<1.88 mg/day). Among the 8 algorithms compared, the algorithms of Wei, Huang, and Miao showed a lower MAE and higher percentage within 20% in both the initial and the stable warfarin dose prediction and in the low-dose and the ideal-dose ranges. Conclusions All of the selected pharmacogenetics-based warfarin dosing regimens performed similarly in our cohort. However, the algorithms of Wei, Huang, and Miao showed a better potential for warfarin dose prediction in the initial and the stable treatment phases in Han-Chinese patients.
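
    The two accuracy metrics are easy to restate in code; the dose vectors below are illustrative.

        import numpy as np

        def mae(pred, actual):
            """Mean absolute error of the predicted dose, mg/day."""
            return float(np.mean(np.abs(np.asarray(pred) - np.asarray(actual))))

        def pct_within_20(pred, actual):
            """Percentage of patients predicted within 20% of the actual dose."""
            pred, actual = np.asarray(pred, float), np.asarray(actual, float)
            return 100.0 * np.mean(np.abs(pred - actual) <= 0.20 * actual)

        observed  = [2.5, 3.75, 1.5, 5.0, 3.0]     # mg/day, illustrative
        predicted = [2.8, 3.30, 1.9, 4.6, 3.1]
        print(f"MAE = {mae(predicted, observed):.2f} mg/day, "
              f"within 20% = {pct_within_20(predicted, observed):.0f}%")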

  2. Development and verification of an analytical algorithm to predict absorbed dose distributions in ocular proton therapy using Monte Carlo simulations.

    PubMed

    Koch, Nicholas C; Newhauser, Wayne D

    2010-02-01

    Proton beam radiotherapy is an effective and non-invasive treatment for uveal melanoma. Recent research efforts have focused on improving the dosimetric accuracy of treatment planning and overcoming the present limitation of relative analytical dose calculations. Monte Carlo algorithms have been shown to accurately predict dose per monitor unit (D/MU) values, but this has yet to be shown for analytical algorithms dedicated to ocular proton therapy, which are typically less computationally expensive than Monte Carlo algorithms. The objective of this study was to determine if an analytical method could predict absolute dose distributions and D/MU values for a variety of treatment fields like those used in ocular proton therapy. To accomplish this objective, we used a previously validated Monte Carlo model of an ocular nozzle to develop an analytical algorithm to predict three-dimensional distributions of D/MU values from pristine Bragg peaks and therapeutically useful spread-out Bragg peaks (SOBPs). Results demonstrated generally good agreement between the analytical and Monte Carlo absolute dose calculations. While agreement in the proximal region decreased for beams with less penetrating Bragg peaks compared with the open-beam condition, the difference was shown to be largely attributable to edge-scattered protons. A method for including this effect in any future analytical algorithm was proposed. Comparisons of D/MU values showed typical agreement to within 0.5%. We conclude that analytical algorithms can be employed to accurately predict absolute proton dose distributions delivered by an ocular nozzle.
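
    As a cartoon of how an SOBP is composed from pristine peaks, the sketch below fits non-negative weights so the summed depth dose is flat over a chosen modulation interval; the analytic peak shape is a toy, not the validated nozzle model of the paper.

        import numpy as np
        from scipy.optimize import nnls

        z = np.linspace(0.0, 40.0, 401)            # depth, mm

        def pristine(z, r):
            """Cartoon Bragg curve with range r: slow rise plus a sharp peak."""
            entrance = 0.4 + 0.3 * (z / r)
            peak = np.exp(-((z - r) ** 2) / (2.0 * 1.0**2))
            return np.where(z <= r + 3.0, entrance + peak, 0.0)

        ranges = np.arange(20.0, 32.0, 1.0)        # pullback steps, mm
        A = np.stack([pristine(z, r) for r in ranges], axis=1)

        # fit non-negative weights so the summed dose is 1 on the plateau
        plateau = (z >= 22.0) & (z <= 30.0)
        w, _ = nnls(A[plateau], np.ones(plateau.sum()))
        sobp = A @ w
        flatness = sobp[plateau].std() / sobp[plateau].mean()
        print(f"plateau flatness (std/mean): {flatness:.3f}")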

  3. Development of a dose algorithm for the modified panasonic UD-802 personal dosimeter used at three mile island

    SciTech Connect

    Miklos, J. A.; Plato, P.

    1988-01-01

    During the fall of 1981, the personnel dosimetry group at GPU Nuclear Corporation at Three Mile Island (TMI) requested assistance from The University of Michigan (UM) in developing a dose algorithm for use at TMI-2. The dose algorithm had to satisfy the specific needs of TMI-2, particularly the need to distinguish beta-particle emitters of different energies, as well as having the capability of satisfying the requirements of the American National Standards Institute (ANSI) N13.11-1983 standard. A standard Panasonic UD-802 dosimeter was modified by having the plastic filter over element 2 removed. The dosimeter and hanger consist of elements with a 14 mg/cm² density thickness and the filtrations shown. The hanger on this dosimeter had a double open window to facilitate monitoring for low-energy beta particles. The dose algorithm was written to satisfy the requirements of the ANSI N13.11-1983 standard, to include ²⁰⁴Tl and mixtures of ²⁰⁴Tl with ⁹⁰Sr/⁹⁰Y and ¹³⁷Cs, and to include 81- and 200-keV average energy X-ray spectra. Stress tests were conducted to observe the algorithm's performance at low doses, under temperature and humidity variations, and its residual response following high-dose irradiations. The ability of the algorithm to determine dose from the beta particles of ¹⁴⁷Pm was also investigated.

  4. Toward adaptive radiotherapy for head and neck patients: Uncertainties in dose warping due to the choice of deformable registration algorithm

    SciTech Connect

    Veiga, Catarina Royle, Gary; Lourenço, Ana Mónica; Mouinuddin, Syed; Herk, Marcel van; Modat, Marc; Ourselin, Sébastien; McClelland, Jamie R.

    2015-02-15

    Purpose: The aims of this work were to evaluate the performance of several deformable image registration (DIR) algorithms implemented in our in-house software (NiftyReg) and the uncertainties inherent in using different algorithms for dose warping. Methods: The authors describe a DIR-based adaptive radiotherapy workflow, using CT and cone-beam CT (CBCT) imaging. The transformations that mapped the anatomy between the two time points were obtained using four different DIR approaches available in NiftyReg. These included a standard unidirectional algorithm and more sophisticated bidirectional ones that encourage or ensure inverse consistency. The forward (CT-to-CBCT) deformation vector fields (DVFs) were used to propagate the CT Hounsfield units and structures to the daily geometry for “dose of the day” calculations, while the backward (CBCT-to-CT) DVFs were used to remap the dose of the day onto the planning CT (pCT). Data from five head and neck patients were used to evaluate the performance of each implementation based on geometrical matching, physical properties of the DVFs, and similarity between warped dose distributions. Geometrical matching was verified in terms of the Dice similarity coefficient (DSC), distance transform, false positives, and false negatives. The physical properties of the DVFs were assessed by calculating the harmonic energy, determinant of the Jacobian, and inverse consistency error of the transformations. Dose distributions were displayed on the pCT dose space and compared using dose difference (DD), distance to dose difference, and dose-volume histograms. Results: All the DIR algorithms gave similar results in terms of geometrical matching, with an average DSC of 0.85 ± 0.08, but the underlying properties of the DVFs varied in terms of smoothness and inverse consistency. When comparing the doses warped by different algorithms, we found a root mean square DD of 1.9% ± 0.8% of the prescribed dose (pD) and that an average of 9% ± 4% of
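
    The Dice similarity coefficient used for the geometric checks above is easy to state explicitly; a minimal numpy version, assuming the two structure masks are boolean arrays on the same voxel grid:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * (a & b).sum() / denom if denom else 1.0
```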

  5. Characterization of adaptive statistical iterative reconstruction algorithm for dose reduction in CT: A pediatric oncology perspective

    SciTech Connect

    Brady, S. L.; Yee, B. S.; Kaufman, R. A.

    2012-09-15

    Purpose: This study demonstrates a means of implementing an adaptive statistical iterative reconstruction (ASiR™) technique for dose reduction in computed tomography (CT) while maintaining similar noise levels in the reconstructed image. Effects on image quality and noise texture were assessed at all implementation levels of ASiR™. Empirically derived dose reduction limits were established for ASiR™ for imaging of the trunk for a pediatric oncology population ranging from 1 yr old through adolescence/adulthood. Methods: Image quality was assessed using metrics established by the American College of Radiology (ACR) CT accreditation program. Each image quality metric was tested using the ACR CT phantom with 0%-100% ASiR™ blended with filtered back projection (FBP) reconstructed images. Additionally, the noise power spectrum (NPS) was calculated for three common reconstruction filters of the trunk. The empirically derived limitations on ASiR™ implementation for dose reduction were assessed using 1-, 5-, and 10-yr-old and adolescent/adult anthropomorphic phantoms. To assess dose reduction limits, the phantoms were scanned in increments of increased noise index (decrementing mA using automatic tube current modulation) balanced with ASiR™ reconstruction to maintain noise equivalence with the 0% ASiR™ image. Results: The ASiR™ algorithm did not produce any unfavorable effects on image quality as assessed by ACR criteria. Conversely, low-contrast resolution was found to improve due to the reduction of noise in the reconstructed images. NPS calculations demonstrated that, at progressively higher percentages of ASiR™ reconstruction, images had lower noise variance and coarser graininess, with noise shifted toward lower frequencies; and in spite of the similar magnitudes of noise, the image reconstructed with 50% or more ASiR™ presented a more
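
    The noise power spectrum quoted above is typically estimated from mean-subtracted regions of interest in uniform-phantom images; a minimal 2D sketch under that assumption (the ROI stack and pixel size are hypothetical inputs):

```python
import numpy as np

def nps_2d(rois, pixel_size_mm):
    """Estimate a 2D noise power spectrum from a stack of uniform-phantom
    ROIs with shape (n_rois, ny, nx), using the standard normalization
    NPS = (pixel area / n_pixels) * <|DFT of mean-subtracted ROI|^2>."""
    rois = np.asarray(rois, dtype=float)
    n_rois, ny, nx = rois.shape
    noise = rois - rois.mean(axis=(1, 2), keepdims=True)  # remove per-ROI mean
    power = np.abs(np.fft.fft2(noise)) ** 2
    return power.mean(axis=0) * pixel_size_mm ** 2 / (nx * ny)
```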

  6. Differences in dose-volumetric data between the analytical anisotropic algorithm and the x-ray voxel Monte Carlo algorithm in stereotactic body radiation therapy for lung cancer

    SciTech Connect

    Mampuya, Wambaka Ange; Matsuo, Yukinori; Nakamura, Akira; Nakamura, Mitsuhiro; Mukumoto, Nobutaka; Miyabe, Yuki; Narabayashi, Masaru; Sakanaka, Katsuyuki; Mizowaki, Takashi; Hiraoka, Masahiro

    2013-04-01

    The objective of this study was to evaluate the differences in dose-volumetric data obtained using the analytical anisotropic algorithm (AAA) vs the x-ray voxel Monte Carlo (XVMC) algorithm for stereotactic body radiation therapy (SBRT) for lung cancer. Dose-volumetric data from 20 patients treated with SBRT for solitary lung cancer, generated using the iPlan XVMC for the Novalis system consisting of a 6-MV linear accelerator and micro-multileaf collimators, were recalculated with the AAA in Eclipse using the same monitor units and identical beam setup. The mean isocenter dose was 100.2% and 98.7% of the prescribed dose according to XVMC and AAA, respectively. Mean values of the maximal dose (Dmax), the minimal dose (Dmin), and the dose received by 95% of the volume (D95) for the planning target volume (PTV) with XVMC were 104.3%, 75.1%, and 86.2%, respectively. When recalculated with the AAA, those values were 100.8%, 77.1%, and 85.4%, respectively. Mean dose parameter values considered for the normal lung, namely the mean lung dose, V5, and V20, were 3.7 Gy, 19.4%, and 5.0% for XVMC and 3.6 Gy, 18.3%, and 4.7% for the AAA, respectively. All of these dose-volumetric differences between the 2 algorithms were within 5% of the prescribed dose. The effect of PTV size and tumor location, respectively, on the differences in dose parameters for the PTV between the AAA and XVMC was evaluated. A significant effect of the PTV size on the difference in D95 between the AAA and XVMC was observed (p = 0.03). Differences in the marginal doses, namely Dmin and D95, were statistically significant between peripherally and centrally located tumors (p = 0.04 and p = 0.02, respectively). Tumor location and volume might have an effect on the differences in dose-volumetric parameters. The differences between AAA and XVMC were considered to be within an acceptable range (<5 percentage points)

  7. A generalized 2D pencil beam scaling algorithm for proton dose calculation in heterogeneous slab geometries

    SciTech Connect

    Westerly, David C.; Mo Xiaohu; DeLuca, Paul M. Jr.; Tome, Wolfgang A.; Mackie, Thomas R.

    2013-06-15

    Purpose: Pencil beam algorithms are commonly used for proton therapy dose calculations. Szymanowski and Oelfke ['Two-dimensional pencil beam scaling: An improved proton dose algorithm for heterogeneous media,' Phys. Med. Biol. 47, 3313-3330 (2002)] developed a two-dimensional (2D) scaling algorithm which accurately models the radial pencil beam width as a function of depth in heterogeneous slab geometries using a scaled expression for the radial kernel width in water as a function of depth and kinetic energy. However, an assumption made in the derivation of the technique limits its range of validity to cases where the input expression for the radial kernel width in water is derived from a local scattering power model. The goal of this work is to derive a generalized form of 2D pencil beam scaling that is independent of the scattering power model and appropriate for use with any expression for the radial kernel width in water as a function of depth. Methods: Using Fermi-Eyges transport theory, the authors derive an expression for the radial pencil beam width in heterogeneous slab geometries which is independent of the proton scattering power and related quantities. The authors then perform test calculations in homogeneous and heterogeneous slab phantoms using both the original 2D scaling model and the new model with expressions for the radial kernel width in water computed from both local and nonlocal scattering power models, as well as a nonlocal parameterization of Molière scattering theory. In addition to kernel width calculations, dose calculations are also performed for a narrow Gaussian proton beam. Results: Pencil beam width calculations indicate that both 2D scaling formalisms perform well when the radial kernel width in water is derived from a local scattering power model. Computing the radial kernel width from a nonlocal scattering model results in the local 2D scaling formula under-predicting the pencil beam width by as much as 1.4 mm (21%) at the depth

  8. A generalized 2D pencil beam scaling algorithm for proton dose calculation in heterogeneous slab geometries

    PubMed Central

    Westerly, David C.; Mo, Xiaohu; Tomé, Wolfgang A.; Mackie, Thomas R.; DeLuca, Paul M.

    2013-01-01

    Purpose: Pencil beam algorithms are commonly used for proton therapy dose calculations. Szymanowski and Oelfke [“Two-dimensional pencil beam scaling: An improved proton dose algorithm for heterogeneous media,” Phys. Med. Biol. 47, 3313–3330 (2002)10.1088/0031-9155/47/18/304] developed a two-dimensional (2D) scaling algorithm which accurately models the radial pencil beam width as a function of depth in heterogeneous slab geometries using a scaled expression for the radial kernel width in water as a function of depth and kinetic energy. However, an assumption made in the derivation of the technique limits its range of validity to cases where the input expression for the radial kernel width in water is derived from a local scattering power model. The goal of this work is to derive a generalized form of 2D pencil beam scaling that is independent of the scattering power model and appropriate for use with any expression for the radial kernel width in water as a function of depth. Methods: Using Fermi-Eyges transport theory, the authors derive an expression for the radial pencil beam width in heterogeneous slab geometries which is independent of the proton scattering power and related quantities. The authors then perform test calculations in homogeneous and heterogeneous slab phantoms using both the original 2D scaling model and the new model with expressions for the radial kernel width in water computed from both local and nonlocal scattering power models, as well as a nonlocal parameterization of Molière scattering theory. In addition to kernel width calculations, dose calculations are also performed for a narrow Gaussian proton beam. Results: Pencil beam width calculations indicate that both 2D scaling formalisms perform well when the radial kernel width in water is derived from a local scattering power model. Computing the radial kernel width from a nonlocal scattering model results in the local 2D scaling formula under-predicting the pencil beam width by as

  9. Effect of nonlinear three-dimensional optimized reconstruction algorithm filter on image quality and radiation dose: validation on phantoms.

    PubMed

    Bai, Mei; Chen, Jiuhong; Raupach, Rainer; Suess, Christoph; Tao, Ying; Peng, Mingchen

    2009-01-01

    A new technique called the nonlinear three-dimensional optimized reconstruction algorithm filter (3D ORA filter) is currently used to improve CT image quality and reduce radiation dose. This technical note describes the comparison of image noise, slice sensitivity profile (SSP), contrast-to-noise ratio, and modulation transfer function (MTF) on phantom images processed with and without the 3D ORA filter, and the effect of the 3D ORA filter on CT images at a reduced dose. For CT head scans the noise reduction was up to 54% with typical bone reconstruction algorithms (H70) and a 0.6 mm slice thickness; for liver CT scans the noise reduction was up to 30% with typical high-resolution reconstruction algorithms (B70) and a 0.6 mm slice thickness. MTF and SSP did not change significantly with the application of 3D ORA filtering (P > 0.05), whereas noise was reduced (P < 0.05). The low contrast detectability and MTF of images obtained at a reduced dose and filtered by the 3D ORA were equivalent to those of standard dose CT images; there was no significant difference in image noise of scans taken at a reduced dose, filtered using 3D ORA and standard dose CT (P > 0.05). The 3D ORA filter shows good potential for reducing image noise without affecting image quality attributes such as sharpness. By applying this approach, the same image quality can be achieved whilst gaining a marked dose reduction.

  10. Effect of nonlinear three-dimensional optimized reconstruction algorithm filter on image quality and radiation dose: Validation on phantoms

    SciTech Connect

    Bai Mei; Chen Jiuhong; Raupach, Rainer; Suess, Christoph; Tao Ying; Peng Mingchen

    2009-01-15

    A new technique called the nonlinear three-dimensional optimized reconstruction algorithm filter (3D ORA filter) is currently used to improve CT image quality and reduce radiation dose. This technical note describes the comparison of image noise, slice sensitivity profile (SSP), contrast-to-noise ratio, and modulation transfer function (MTF) on phantom images processed with and without the 3D ORA filter, and the effect of the 3D ORA filter on CT images at a reduced dose. For CT head scans the noise reduction was up to 54% with typical bone reconstruction algorithms (H70) and a 0.6 mm slice thickness; for liver CT scans the noise reduction was up to 30% with typical high-resolution reconstruction algorithms (B70) and a 0.6 mm slice thickness. MTF and SSP did not change significantly with the application of 3D ORA filtering (P>0.05), whereas noise was reduced (P<0.05). The low contrast detectability and MTF of images obtained at a reduced dose and filtered by the 3D ORA were equivalent to those of standard dose CT images; there was no significant difference in image noise of scans taken at a reduced dose, filtered using 3D ORA and standard dose CT (P>0.05). The 3D ORA filter shows good potential for reducing image noise without affecting image quality attributes such as sharpness. By applying this approach, the same image quality can be achieved whilst gaining a marked dose reduction.

  11. Feasibility study of a simple approximation algorithm for in-vivo dose reconstruction by using the transit dose measured using an EPID

    NASA Astrophysics Data System (ADS)

    Hwang, Ui-Jung; Song, Mi Hee; Baek, Tae Seong; Chung, Eun Ji; Yoon, Myonggeun

    2015-02-01

    The purpose of this study is to verify the accuracy of the dose delivered to the patient during intensity-modulated radiation therapy (IMRT) by using in-vivo dosimetry, and so to avoid accidental exposure of healthy tissues and organs close to tumors. The in-vivo dose was reconstructed by back projection of the transit dose with a simple approximation that considered only the percent depth dose and the inverse square law. In comparisons of dose distributions between the calculated dose map and the film measurement, the gamma index was less than one for 96.3% of all pixels with the homogeneous phantom, but the passing rate was reduced to 92.8% with the inhomogeneous phantom, a reduction apparently due to the inaccuracy of the reconstruction algorithm for inhomogeneity. The proposed method of calculating the dose inside a phantom was of comparable or better accuracy than the treatment planning system, suggesting that it can be used to verify the accuracy of the dose delivered to the patient during treatment.
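
    One plausible reading of the simple approximation described above, for a single point, is a percent-depth-dose ratio to move the dose from the exit surface to the depth of interest and an inverse-square factor to move it from the EPID plane back toward the source; all function names, parameters, and the exponential PDD model below are assumptions for illustration, not the authors' implementation:

```python
import math

def backproject_point_dose(transit_dose, pdd, depth, exit_depth,
                           source_to_point, source_to_epid):
    """Back-project an EPID transit dose reading to a point at `depth`,
    using only a percent-depth-dose ratio and the inverse square law."""
    pdd_ratio = pdd(depth) / pdd(exit_depth)              # exit surface -> depth
    inv_square = (source_to_epid / source_to_point) ** 2  # EPID plane -> point
    return transit_dose * pdd_ratio * inv_square

# Toy exponential PDD with a 1.5 cm build-up region (hypothetical parameters)
pdd = lambda d: 100.0 * (d / 1.5 if d < 1.5 else math.exp(-0.05 * (d - 1.5)))

dose = backproject_point_dose(transit_dose=0.8, pdd=pdd, depth=10.0,
                              exit_depth=20.0, source_to_point=95.0,
                              source_to_epid=150.0)
```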

  12. Clinical implementation of a digital tomosynthesis-based seed reconstruction algorithm for intraoperative postimplant dose evaluation in low dose rate prostate brachytherapy

    SciTech Connect

    Brunet-Benkhoucha, Malik; Verhaegen, Frank; Lassalle, Stephanie; Beliveau-Nadeau, Dominic; Reniers, Brigitte; Donath, David; Taussky, Daniel; Carrier, Jean-Francois

    2009-11-15

    Purpose: The low dose rate brachytherapy procedure would benefit from an intraoperative postimplant dosimetry verification technique to identify possible suboptimal dose coverage and suggest a potential reimplantation. The main objective of this project is to develop an efficient, operator-free, intraoperative seed detection technique using the imaging modalities available in a low dose rate brachytherapy treatment room. Methods: This intraoperative detection allows a complete dosimetry calculation that can be performed right after an I-125 prostate seed implantation, while the patient is still under anesthesia. To accomplish this, a digital tomosynthesis-based algorithm was developed. This automatic filtered reconstruction of the 3D volume requires seven projections acquired over a total angle of 60° with an isocentric imaging system. Results: A phantom study was performed to validate the technique, which was then used in a retrospective clinical study involving 23 patients. In the patient study, the automatic tomosynthesis-based reconstruction yielded a seed detection rate of 96.7% with 2.6% false positives. The seed localization error obtained in the phantom study is 0.4 ± 0.4 mm. The average time needed for reconstruction is below 1 min. The reconstruction algorithm also provides the seed orientation with an uncertainty of 10° ± 8°. The seed detection algorithm presented here is reliable and was efficiently used in the clinic. Conclusions: When combined with an appropriate coregistration technique to identify the organs in the seed coordinate system, this algorithm will offer new possibilities for a next generation of clinical brachytherapy systems.

  13. A correction scheme for a simplified analytical random walk model algorithm of proton dose calculation in distal Bragg peak regions

    NASA Astrophysics Data System (ADS)

    Yao, Weiguang; Merchant, Thomas E.; Farr, Jonathan B.

    2016-10-01

    The lateral homogeneity assumption is used in most analytical algorithms for proton dose, such as the pencil-beam algorithms and our simplified analytical random walk model. To improve the dose calculation in the distal fall-off region in heterogeneous media, we analyzed primary proton fluence near heterogeneous media and propose to calculate the lateral fluence with voxel-specific Gaussian distributions. The lateral fluence from a beamlet is no longer expressed by a single Gaussian for all the lateral voxels, but by a specific Gaussian for each lateral voxel. The voxel-specific Gaussian for the beamlet of interest is calculated by re-initializing the fluence deviation on an effective surface where the proton energies of the beamlet of interest and the beamlet passing the voxel are the same. The dose improvement from the correction scheme was demonstrated by the dose distributions in two sets of heterogeneous phantoms consisting of cortical bone, lung, and water and by evaluating distributions in example patients with a head-and-neck tumor and metal spinal implants. The dose distributions from Monte Carlo simulations were used as the reference. The correction scheme effectively improved the dose calculation accuracy in the distal fall-off region and increased the gamma test pass rate. The extra computation for the correction was about 20% of that for the original algorithm but is dependent upon patient geometry.

  14. Accuracy of pencil-beam redefinition algorithm dose calculations in patient-like cylindrical phantoms for bolus electron conformal therapy

    SciTech Connect

    Carver, Robert L.; Hogstrom, Kenneth R.; Chu, Connel; Fields, Robert S.; Sprunger, Conrad P.

    2013-07-15

    Purpose: The purpose of this study was to document the improved accuracy of the pencil beam redefinition algorithm (PBRA) compared to the pencil beam algorithm (PBA) for bolus electron conformal therapy using cylindrical patient phantoms based on patient computed tomography (CT) scans of retromolar trigone and nose cancer. Methods: PBRA and PBA electron dose calculations were compared with measured dose in retromolar trigone and nose phantoms both with and without bolus. For the bolus treatment plans, a radiation oncologist outlined a planning target volume (PTV) on the central axis slice of the CT scan for each phantom. A bolus was designed using the planning.decimal® (p.d) software (.decimal, Inc., Sanford, FL) to conform the 90% dose line to the distal surface of the PTV. Dose measurements were taken with thermoluminescent dosimeters placed into predrilled holes. The Pinnacle³ (Philips Healthcare, Andover, MA) treatment planning system was used to calculate PBA dose distributions. The PBRA dose distributions were calculated with an in-house C++ program. In order to accurately account for the phantom materials, a table correlating CT number to relative electron stopping and scattering powers was compiled and used for both PBA and PBRA dose calculations. Accuracy was determined by comparing differences in measured and calculated dose, as well as the distance to agreement for each measurement point. Results: The measured doses had an average precision of 0.9%. For the retromolar trigone phantom, the PBRA dose calculations had an average ±1σ dose difference (calculated - measured) of -0.65% ± 1.62% without the bolus and -0.20% ± 1.54% with the bolus. The PBA dose calculation had an average dose difference of 0.19% ± 3.27% without the bolus and -0.05% ± 3.14% with the bolus. For the nose phantom, the PBRA dose calculations had an average dose difference of 0.50% ± 3.06% without bolus and -0.18% ± 1.22% with the bolus. The PBA

  15. Comparison of Nine Statistical Model Based Warfarin Pharmacogenetic Dosing Algorithms Using the Racially Diverse International Warfarin Pharmacogenetic Consortium Cohort Database

    PubMed Central

    Liu, Rong; Li, Xi; Zhang, Wei; Zhou, Hong-Hao

    2015-01-01

    Objective Multiple linear regression (MLR) and machine learning techniques in pharmacogenetic algorithm-based warfarin dosing have been reported. However, the performance of these algorithms in racially diverse groups has never been objectively evaluated and compared. In this literature-based study, we compared the performances of eight machine learning techniques with those of MLR in a large, racially diverse cohort. Methods MLR, artificial neural network (ANN), regression tree (RT), multivariate adaptive regression splines (MARS), boosted regression tree (BRT), support vector regression (SVR), random forest regression (RFR), lasso regression (LAR) and Bayesian additive regression trees (BART) were applied to warfarin dose algorithms in a cohort from the International Warfarin Pharmacogenetics Consortium database. Covariates obtained by stepwise regression from 80% of randomly selected patients were used to develop the algorithms. To compare the performances of these algorithms, the mean percentage of patients whose predicted dose fell within 20% of the actual dose (mean percentage within 20%) and the mean absolute error (MAE) were calculated in the remaining 20% of patients. The performances of these techniques in different races, as well as across the dose ranges of therapeutic warfarin, were compared. Robust results were obtained after 100 rounds of resampling. Results BART, MARS and SVR were statistically indistinguishable and significantly outperformed all the other approaches in the whole cohort (MAE: 8.84–8.96 mg/week, mean percentage within 20%: 45.88%–46.35%). In the White population, MARS and BART showed a higher mean percentage within 20% and a lower mean MAE than those of MLR (all p values < 0.05). In the Asian population, SVR, BART, MARS and LAR performed the same as MLR. MLR and LAR performed best in the Black population. When patients were grouped in terms of warfarin dose range, all machine learning techniques except ANN and LAR showed significantly
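
    The evaluation loop described above (fit on a random 80% split, score MAE and percentage-within-20% on the held-out 20%, resample) is easy to reproduce in outline; a compact scikit-learn sketch with synthetic covariates standing in for the IWPC cohort:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))                       # stand-in covariates
dose = 30.0 + 4.0 * (X @ rng.normal(size=6)) + rng.normal(scale=3.0, size=1000)

models = {"MLR": LinearRegression(),
          "RFR": RandomForestRegressor(n_estimators=200, random_state=0)}

for name, model in models.items():
    X_tr, X_te, y_tr, y_te = train_test_split(X, dose, test_size=0.2, random_state=1)
    pred = model.fit(X_tr, y_tr).predict(X_te)
    mae = np.mean(np.abs(pred - y_te))
    within20 = np.mean(np.abs(pred - y_te) <= 0.2 * np.abs(y_te)) * 100.0
    print(f"{name}: MAE = {mae:.2f} mg/week, within 20% = {within20:.1f}%")
```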

  16. TH-E-BRE-07: Development of Dose Calculation Error Predictors for a Widely Implemented Clinical Algorithm

    SciTech Connect

    Egan, A; Laub, W

    2014-06-15

    Purpose: Several shortcomings of the current implementation of the analytic anisotropic algorithm (AAA) may lead to dose calculation errors in highly modulated treatments delivered to highly heterogeneous geometries. Here we introduce a set of dosimetric error predictors that can be applied to a clinical treatment plan and patient geometry in order to identify high-risk plans. Once a problematic plan is identified, the treatment can be recalculated with a more accurate algorithm in order to better assess its viability. Methods: Here we focus on three distinct sources of dosimetric error in the AAA algorithm. First, due to a combination of discrepancies in small-field beam modeling as well as volume averaging effects, dose calculated through small MLC apertures can be underestimated, while that behind small MLC blocks can be overestimated. Second, due to the rectilinear scaling of the Monte Carlo-generated pencil beam kernel, energy is not properly transported through heterogeneities near, but not impeding, the central axis of the beamlet. And third, the AAA overestimates dose in regions of very low density (<0.2 g/cm³). We have developed an algorithm to detect the location and magnitude of each scenario within the patient geometry, namely the field-size index (FSI), the heterogeneous scatter index (HSI), and the low-density index (LDI), respectively. Results: The error indices successfully identify deviations between AAA and Monte Carlo dose distributions in simple phantom geometries. The algorithms are currently implemented in the MATLAB computing environment and are able to run on a typical RapidArc head and neck geometry in less than an hour. Conclusion: Because these error indices successfully identify each type of error in contrived cases, with sufficient benchmarking this method can be developed into a clinical tool that may be able to help estimate AAA dose calculation errors and indicate when it might be advisable to use Monte Carlo calculations.
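
    Of the three predictors, the low-density index is the simplest to picture: score how much clinically relevant dose lands in voxels below the stated density threshold. The authors implemented their indices in MATLAB; the definition below is a hypothetical Python rendering, not their code:

```python
import numpy as np

def low_density_index(density, dose, density_threshold=0.2, dose_cutoff=0.05):
    """Fraction of clinically relevant dose deposited in voxels with
    density below `density_threshold` g/cm^3 (hypothetical LDI definition).
    `density` and `dose` are voxel grids of the same shape."""
    relevant = dose > dose_cutoff * dose.max()   # ignore near-zero-dose voxels
    low = relevant & (density < density_threshold)
    total = dose[relevant].sum()
    return float(dose[low].sum() / total) if total else 0.0
```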

  17. Development of a phantom to validate high-dose-rate brachytherapy treatment planning systems with heterogeneous algorithms

    SciTech Connect

    Moura, Eduardo S.; Rostelato, Maria Elisa C. M.; Zeituni, Carlos A.

    2015-04-15

    Purpose: This work presents the development of a phantom to verify the treatment planning system (TPS) algorithms used for high-dose-rate (HDR) brachytherapy. It is designed to measure the relative dose in heterogeneous media. The experimental details used, simulation methods, and comparisons with a commercial TPS are also provided. Methods: To simulate heterogeneous conditions, four materials were used: Virtual Water™ (VM), BR50/50™, cork, and aluminum. The materials were arranged in 11 heterogeneity configurations. Three dosimeters were used to measure the relative response from an HDR ¹⁹²Ir source: TLD-100™, Gafchromic® EBT3 film, and an Exradin™ A1SL ionization chamber. To compare the results from the experimental measurements, the various configurations were modeled in the PENELOPE/penEasy Monte Carlo code. Images of each setup geometry were acquired from a CT scanner and imported into the BrachyVision™ TPS software, which includes the grid-based Boltzmann solver Acuros™. The results of the measurements performed in the heterogeneous setups were normalized to the dose values measured in the homogeneous Virtual Water™ setup, and the respective differences due to the heterogeneities were considered. Additionally, dose values calculated based on the American Association of Physicists in Medicine Task Group 43 formalism were compared to dose values calculated with the Acuros™ algorithm in the phantom. Calculated doses were compared at the same points where measurements had been performed. Results: Differences in the relative response as high as 11.5% from the homogeneous setup were found when the heterogeneous materials were inserted into the experimental phantom. The aluminum and cork materials produced larger differences than the plastic materials, with the BR50/50™ material producing results similar to the Virtual Water™ results. Our experimental methods agree with the PENELOPE/penEasy simulations for most setups and dosimeters. The

  18. Photon beam dosimetry with EBT3 film in heterogeneous regions: Application to the evaluation of dose-calculation algorithms

    NASA Astrophysics Data System (ADS)

    Jung, Hyunuk; Kum, Oyeon; Han, Youngyih; Park, Byungdo; Cheong, Kwang-Ho

    2014-12-01

    For a better understanding of the accuracy of state-of-the-art radiation therapies, 2-dimensional dosimetry in a patient-like environment will be helpful. Therefore, the dosimetry of EBT3 films in non-water-equivalent tissues was investigated, and the accuracy of commercially used dose-calculation algorithms was evaluated against EBT3 measurements. Dose distributions were measured with EBT3 films for an in-house-designed phantom that contained a lung or a bone substitute, i.e., an air cavity (3 × 3 × 3 cm³) or teflon (2 × 2 × 2 cm³ or 3 × 3 × 3 cm³), respectively. The phantom was irradiated with 6-MV X-rays with field sizes of 2 × 2, 3 × 3, and 5 × 5 cm². The accuracy of EBT3 dosimetry was evaluated by comparing the measured dose with the dose obtained from Monte Carlo (MC) simulations. The dose to the bone-equivalent material was obtained by multiplying the EBT3 measurements by the stopping power ratio (SPR). The EBT3 measurements were then compared with the predictions from four algorithms: Monte Carlo (MC) in iPlan, Acuros XB (AXB) and the analytical anisotropic algorithm (AAA) in Eclipse, and superposition-convolution (SC) in Pinnacle. For the air cavity, the EBT3 measurements agreed with the MC calculation to within 2% on average. For teflon, the EBT3 measurements differed by 9.297% (±0.9229%) on average from the Monte Carlo calculation before dose conversion, and by 0.717% (±0.6546%) after applying the SPR. The doses calculated by using the MC, AXB, AAA, and SC algorithms for the air cavity differed from the EBT3 measurements on average by 2.174%, 2.863%, 18.01%, and 8.391%, respectively; for teflon, the average differences were 3.447%, 4.113%, 7.589%, and 5.102%. The EBT3 measurements corrected with the SPR agreed with the MC results to within 2% on average, both within and beyond the heterogeneities, indicating that EBT3 dosimetry can be used in heterogeneous media. The MC and the AXB dose calculation algorithms exhibited clinically acceptable accuracy (<5%) in

  19. Assessing image quality and dose reduction of a new x-ray computed tomography iterative reconstruction algorithm using model observers

    SciTech Connect

    Tseng, Hsin-Wu Kupinski, Matthew A.; Fan, Jiahua; Sainath, Paavana; Hsieh, Jiang

    2014-07-15

    Purpose: A number of different techniques have been developed to reduce radiation dose in x-ray computed tomography (CT) imaging. In this paper, the authors compare task-based measures of image quality of CT images reconstructed by two algorithms: conventional filtered back projection (FBP) and a new iterative reconstruction algorithm (IR). Methods: To assess image quality, the authors used the performance of a channelized Hotelling observer acting on reconstructed image slices. The selected channels are dense difference-of-Gaussian (DDOG) channels. A body phantom and a head phantom were imaged 50 times at different dose levels to obtain the data needed to assess image quality. The phantoms consisted of uniform backgrounds with low-contrast signals embedded at various locations. The tasks the observer model performed included (1) detection of a signal of known location and shape, and (2) detection and localization of a signal of known shape. The employed DDOG channels are based on the response of the human visual system. Performance was assessed using the areas under ROC curves and areas under localization ROC curves. Results: For the signal-known-exactly (SKE) and location-unknown/signal-shape-known tasks with circular signals of different sizes and contrasts, the authors' task-based measures showed that FBP-equivalent image quality can be achieved at lower dose levels using the IR algorithm. For the SKE case, the range of dose reduction is 50%–67% (head phantom) and 68%–82% (body phantom). For the location-unknown/signal-shape-known task, dose reductions of 67%–75% (head phantom) and 67%–77% (body phantom) were achievable. These results suggest that IR images at lower dose settings can reach the same image quality as full-dose conventional FBP images. Conclusions: The work presented provides an objective way to quantitatively assess the image quality of a newly introduced CT IR algorithm. The performance of the
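
    For readers unfamiliar with the observer model: a channelized Hotelling observer reduces each image to a few channel responses and applies a prewhitened matched (Hotelling) template in that low-dimensional space. A minimal sketch assuming the channel outputs have already been computed (channel design, e.g. DDOG, is omitted):

```python
import numpy as np

def cho_test_statistics(signal_outputs, noise_outputs):
    """Channelized Hotelling observer applied to channel outputs with
    shape (n_images, n_channels); returns a test statistic per image."""
    mean_diff = signal_outputs.mean(axis=0) - noise_outputs.mean(axis=0)
    # Pooled intra-class covariance of the channel outputs
    cov = 0.5 * (np.cov(signal_outputs.T) + np.cov(noise_outputs.T))
    template = np.linalg.solve(cov, mean_diff)   # Hotelling template
    return signal_outputs @ template, noise_outputs @ template
```

    The area under the ROC curve can then be estimated nonparametrically from the two sets of test statistics.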

  20. Influence of radiation dose and reconstruction algorithm in MDCT assessment of airway wall thickness: A phantom study

    SciTech Connect

    Gomez-Cardona, Daniel; Nagle, Scott K.; Li, Ke; Chen, Guang-Hong; Robinson, Terry E.

    2015-10-15

    Purpose: Wall thickness (WT) is an airway feature of great interest for the assessment of morphological changes in the lung parenchyma. Multidetector computed tomography (MDCT) has recently been used to evaluate airway WT, but the potential risk of radiation-induced carcinogenesis (particularly in younger patients) might limit a wider use of this imaging method in clinical practice. The recent commercial implementation of the statistical model-based iterative reconstruction (MBIR) algorithm, instead of the conventional filtered back projection (FBP) algorithm, has enabled considerable radiation dose reduction in many other clinical applications of MDCT. The purpose of this work was to study the impact of radiation dose and MBIR on the MDCT assessment of airway WT. Methods: An airway phantom was scanned using a clinical MDCT system (Discovery CT750 HD, GE Healthcare) at four tube-potential (kV) levels and five tube-current (mAs) levels. Both FBP and a commercial implementation of MBIR (Veo™, GE Healthcare) were used to reconstruct CT images of the airways. For each kV–mAs combination and each reconstruction algorithm, the contrast-to-noise ratio (CNR) of the airways was measured, and the WT of each airway was measured and compared with the nominal value; the relative bias and the angular standard deviation in the measured WT were calculated. For each airway and reconstruction algorithm, the overall performance of WT quantification across all 20 kV–mAs combinations was quantified by the sum of squares (SSQ) of the difference between the measured and nominal WT values. Finally, the particular kV–mAs combination and reconstruction algorithm that minimized radiation dose while still achieving a reference WT quantification accuracy level was chosen as the optimal acquisition and reconstruction setting. Results: The wall thicknesses of seven airways of different sizes were analyzed in the study. Compared with FBP, MBIR improved the CNR of the airways, particularly at low radiation dose

  1. Development of an algorithm to improve the accuracy of dose delivery in Gamma Knife radiosurgery

    NASA Astrophysics Data System (ADS)

    Cernica, George Dumitru

    2007-12-01

    Gamma Knife stereotactic radiosurgery has demonstrated decades of successful treatments. Despite its high spatial accuracy, the Gamma Knife's planning software, GammaPlan, uses a simple exponential as the TPR curve for all four collimator sizes, and a skull scaling device to acquire ruler measurements from which a three-dimensional spline is interpolated to model the patient's skull. The consequences of these approximations have not been previously investigated. The true TPR curves of the four collimators were measured by blocking 200 of the 201 sources with steel plugs. Additional attenuation was provided through the use of a 16 cm tungsten sphere, designed to enable beamlet measurements along one axis. TPR, PDD, and beamlet profiles were obtained using both an ion chamber and GafChromic EBT film for all collimators. Additionally, an in-house planning algorithm able to calculate the contour of the skull directly from an image set and implement the measured beamlet data in shot time calculations was developed. Clinical and theoretical Gamma Knife cases were imported into our algorithm. The TPR curves showed small deviations from a simple exponential curve, with average discrepancies under 1%, but with a maximum discrepancy of 2% found for the 18 mm collimator beamlet at shallow depths. The consequences for the PDDs of the beamlets were slight, with a maximum difference of 1.6% found for the 18 mm collimator beamlet. Beamlet profiles of the 4 mm, 8 mm, and 14 mm collimators showed some underestimates of the off-axis ratio near the shoulders (up to 10%). The toes of the profiles were underestimated for all collimators, with differences up to 7%. Shot times were affected by up to 1.6% due to TPR differences, but clinical cases showed deviations of no more than 0.5%. The beamlet profiles affected the dose calculations more significantly, with shot time calculations differing by as much as 0.8%. The skull scaling affected the shot time calculations the most significantly, with differences of up to 5

  2. Extracting Gene Networks for Low-Dose Radiation Using Graph Theoretical Algorithms

    PubMed Central

    Voy, Brynn H; Scharff, Jon A; Perkins, Andy D; Saxton, Arnold M; Borate, Bhavesh; Chesler, Elissa J; Branstetter, Lisa K; Langston, Michael A

    2006-01-01

    Genes with common functions often exhibit correlated expression levels, which can be used to identify sets of interacting genes from microarray data. Microarrays typically measure expression across genomic space, creating a massive matrix of co-expression that must be mined to extract only the most relevant gene interactions. We describe a graph theoretical approach to extracting co-expressed sets of genes, based on the computation of cliques. Unlike the results of traditional clustering algorithms, cliques are not disjoint and allow genes to be assigned to multiple sets of interacting partners, consistent with biological reality. A graph is created by thresholding the correlation matrix to include only the correlations most likely to signify functional relationships. Cliques computed from the graph correspond to sets of genes for which significant edges are present between all members of the set, representing potential members of common or interacting pathways. Clique membership can be used to infer function about poorly annotated genes, based on the known functions of better-annotated genes with which they share clique membership (i.e., “guilt-by-association”). We illustrate our method by applying it to microarray data collected from the spleens of mice exposed to low-dose ionizing radiation. Differential analysis is used to identify sets of genes whose interactions are impacted by radiation exposure. The correlation graph is also queried independently of clique to extract edges that are impacted by radiation. We present several examples of multiple gene interactions that are altered by radiation exposure and thus represent potential molecular pathways that mediate the radiation response. PMID:16854212
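
    The pipeline sketched in this abstract (threshold the gene-gene correlation matrix, build a graph, enumerate maximal cliques) maps directly onto standard graph tooling; a small illustration with networkx, using random numbers in place of real expression data and an arbitrary threshold:

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
expr = rng.normal(size=(40, 20))     # 40 genes x 20 arrays (stand-in data)
corr = np.corrcoef(expr)             # gene-gene correlation matrix

threshold = 0.6                      # keep only strong co-expression edges
adj = (np.abs(corr) >= threshold) & ~np.eye(len(corr), dtype=bool)

graph = nx.from_numpy_array(adj.astype(int))
cliques = [c for c in nx.find_cliques(graph) if len(c) >= 3]
print(f"{len(cliques)} maximal cliques of size >= 3")
```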

  3. SU-E-T-477: An Efficient Dose Correction Algorithm Accounting for Tissue Heterogeneities in LDR Brachytherapy

    SciTech Connect

    Mashouf, S; Lai, P; Karotki, A; Keller, B; Beachey, D; Pignol, J

    2014-06-01

    Purpose: Seed brachytherapy is currently used for adjuvant radiotherapy of early stage prostate and breast cancer patients. The current standard for calculation of dose surrounding the brachytherapy seeds is based on the American Association of Physicists in Medicine Task Group No. 43 (TG-43) formalism, which generates the dose in a homogeneous water medium. Recently, AAPM Task Group No. 186 emphasized the importance of accounting for tissue heterogeneities. This can be done using Monte Carlo (MC) methods, but it requires knowing the source structure and tissue atomic composition accurately. In this work we describe an efficient analytical dose inhomogeneity correction algorithm, implemented using the MIM Symphony treatment planning platform, to calculate dose distributions in heterogeneous media. Methods: An inhomogeneity correction factor (ICF) is introduced as the ratio of the absorbed dose in tissue to that in a water medium. The ICF is a function of tissue properties and is independent of the source structure. The ICF is extracted using CT images, and the absorbed dose in tissue can then be calculated by multiplying the dose calculated with the TG-43 formalism by the ICF. To evaluate the methodology, we compared our results with Monte Carlo simulations as well as experiments in phantoms with known density and atomic compositions. Results: The dose distributions obtained by applying the ICF to the TG-43 protocol agreed very well with those of Monte Carlo simulations as well as experiments in all phantoms. In all cases, the mean relative error was reduced by at least 50% when the ICF was applied to the TG-43 protocol. Conclusion: We have developed a new analytical dose calculation method which enables personalized dose calculations in heterogeneous media. The advantages over stochastic methods are computational efficiency and the ease of integration into the clinical setting, as detailed source structure and tissue segmentation are not needed. University of Toronto, Natural Sciences and
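
    The correction step itself is a voxel-wise multiplication of the TG-43 water dose by the ICF; deriving the ICF from CT images is the substance of the abstract and is only faked below, so this is a schematic, not the authors' method:

```python
import numpy as np

def apply_icf(dose_tg43, icf):
    """Scale a TG-43 water-medium dose grid voxel-by-voxel by an
    inhomogeneity correction factor (ICF = dose in tissue / dose in water)."""
    return np.asarray(dose_tg43, dtype=float) * np.asarray(icf, dtype=float)

# Stand-in grids: a TG-43 dose and an ICF that would really come from CT data
dose_water = np.random.rand(64, 64, 64)
icf = np.clip(np.random.normal(1.0, 0.05, dose_water.shape), 0.8, 1.2)
dose_tissue = apply_icf(dose_water, icf)
```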

  4. Comparison of dose calculation algorithms in phantoms with lung equivalent heterogeneities under conditions of lateral electronic disequilibrium

    SciTech Connect

    Carrasco, P.; Jornet, N.; Duch, M.A.; Weber, L.; Ginjaume, M.; Eudaldo, T.; Jurado, D.; Ruiz, A.; Ribas, M.

    2004-10-01

    An extensive set of benchmark measurements of PDDs and beam profiles was performed in a heterogeneous layer phantom, including a lung equivalent heterogeneity, by means of several detectors, and compared against the dose values predicted by different calculation algorithms in two treatment planning systems. PDDs were measured with TLDs, plane-parallel and cylindrical ionization chambers, and beam profiles with films. Additionally, Monte Carlo simulations by means of the PENELOPE code were performed. Four different field sizes (10×10, 5×5, 2×2, and 1×1 cm²) and two lung equivalent materials (CIRS, relative electron density 0.195, and St. Bartholomew Hospital, London, relative electron density 0.244–0.322) were studied. The performance of four correction-based algorithms and one based on convolution-superposition was analyzed. The correction-based algorithms were the Batho, the Modified Batho, and the Equivalent TAR implemented in the Cadplan (Varian) treatment planning system, and the TMS Pencil Beam from the Helax-TMS (Nucletron) treatment planning system. The convolution-superposition algorithm was the Collapsed Cone implemented in the Helax-TMS. The only studied calculation methods that agreed with the measured values within a 2% average inside all media were the Collapsed Cone and the Monte Carlo simulation. The biggest difference between the predicted and the delivered dose on the beam axis was found for the EqTAR algorithm inside the CIRS lung equivalent material in a 2×2 cm² 18 MV x-ray beam. In these conditions, the average and maximum differences against the TLD measurements were 32% and 39%, respectively. In the water equivalent part of the phantom every algorithm correctly predicted the dose (within 2%) everywhere except very close to the interfaces, where differences up to 24% were found for 2×2 cm² 18 MV photon beams. Consistent values were found between the reference detector (ionization chamber in water and TLD in lung) and Monte Carlo

  5. Prospective Evaluation of Prior Image Constrained Compressed Sensing (PICCS) Algorithm in Abdominal CT: A comparison of reduced dose with standard dose imaging

    PubMed Central

    Lubner, Meghan G.; Pickhardt, Perry J.; Kim, David H.; Tang, Jie; Munoz del Rio, Alejandro; Chen, Guang-Hong

    2014-01-01

    Purpose To prospectively study CT dose reduction using the “prior image constrained compressed sensing” (PICCS) reconstruction technique. Methods Immediately following routine standard dose (SD) abdominal MDCT, 50 patients (mean age, 57.7 years; mean BMI, 28.8) underwent a second reduced-dose (RD) scan (targeted dose reduction, 70-90%). DLP, CTDIvol and SSDE were compared. Several reconstruction algorithms (FBP, ASIR, and PICCS) were applied to the RD series. SD images with FBP served as the reference standard. Two blinded readers evaluated each series for subjective image quality and focal lesion detection. Results Mean DLP, CTDIvol, and SSDE for the RD series were 140.3 mGy·cm (median 79.4), 3.7 mGy (median 1.8), and 4.2 mGy (median 2.3), compared with 493.7 mGy·cm (median 345.8), 12.9 mGy (median 7.9), and 14.6 mGy (median 10.1) for the SD series, respectively. Mean effective patient diameter was 30.1 cm (median 30), which translates to a mean SSDE reduction of 72% (p < 0.001). The RD-PICCS image quality score was 2.8 ± 0.5, improved over RD-FBP (1.7 ± 0.7) and RD-ASIR (1.9 ± 0.8) (p < 0.001), but lower than SD (3.5 ± 0.5) (p < 0.001). Readers detected 81% (184/228) of focal lesions on the RD-PICCS series, versus 67% (153/228) and 65% (149/228) for RD-FBP and RD-ASIR, respectively. Mean image noise was significantly reduced on the RD-PICCS series (13.9 HU) compared with RD-FBP (57.2) and RD-ASIR (44.1) (p < 0.001). Conclusion PICCS allows for marked dose reduction at abdominal CT with improved image quality and diagnostic performance over reduced-dose FBP and ASIR. Further study is needed to determine indication-specific dose reduction levels that preserve acceptable diagnostic accuracy relative to higher-dose protocols. PMID:24943136

  6. A matter of timing: identifying significant multi-dose radiotherapy improvements by numerical simulation and genetic algorithm search.

    PubMed

    Angus, Simon D; Piotrowska, Monika Joanna

    2014-01-01

    Multi-dose radiotherapy protocols (fraction dose and timing) currently used in the clinic are the product of human selection based on habit, received wisdom, physician experience and intra-day patient timetabling. However, due to combinatorial considerations, the potential treatment protocol space for a given total dose or treatment length is enormous, even for a relatively coarse search; well beyond the capacity of traditional in-vitro methods. In contrast, high fidelity numerical simulation of tumor development is well suited to the challenge. Building on our previous single-dose numerical simulation model of EMT6/Ro spheroids, a multi-dose irradiation response module is added and calibrated to the effective dose arising from 18 independent multi-dose treatment programs available in the experimental literature. With the developed model, a constrained, non-linear search for better performing candidate protocols is conducted within the vicinity of two benchmarks by genetic algorithm (GA) techniques. After evaluating less than 0.01% of the potential benchmark protocol space, candidate protocols were identified by the GA which conferred an average of 9.4% (max benefit 16.5%) and 7.1% (13.3%) improvement (reduction) in tumour cell count compared to the two benchmarks, respectively. Noticing that a convergent phenomenon of the top performing protocols was their temporal synchronicity, a further series of numerical experiments was conducted with periodic time-gap protocols (10 h to 23 h), leading to the discovery that the performance of the GA search candidates could be replicated by 17-18 h periodic candidates. Further dynamic irradiation-response cell-phase analysis revealed that such periodicity cohered with latent EMT6/Ro cell-phase temporal patterning. Taken together, this study provides powerful evidence towards the hypothesis that even simple inter-fraction timing variations for a given fractional dose program may present a facile, and highly cost-effective means
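
    The search strategy is a textbook genetic algorithm over inter-fraction gaps. The toy sketch below evolves gap sequences against a stand-in fitness that rewards a hypothetical 17.5 h periodicity; the study's actual fitness came from its calibrated EMT6/Ro spheroid simulation, which is not reproduced here:

```python
import random

N_GAPS, POP_SIZE, GENERATIONS = 9, 30, 50
GAPS = list(range(6, 25))      # allowed inter-fraction gaps, hours

def fitness(protocol):
    """Stand-in objective rewarding gaps near a hypothetical 17.5 h resonance;
    the real study scored protocols by simulated tumour cell count."""
    return -sum((g - 17.5) ** 2 for g in protocol)

def mutate(protocol):
    child = list(protocol)
    child[random.randrange(len(child))] = random.choice(GAPS)
    return child

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [[random.choice(GAPS) for _ in range(N_GAPS)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    elite = population[: POP_SIZE // 3]
    population = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                          for _ in range(POP_SIZE - len(elite))]
best = max(population, key=fitness)
```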

  7. Effects of computational phantoms on the effective dose and two-dosimeter algorithm for external photon beams.

    PubMed

    Karimi-Shahri, K; Rafat-Motavalli, L; Miri-Hakimabad, H; Liu, L; Li, J

    2016-09-01

    In this study, the effect of computational phantoms on the effective dose (E), on the responses of dosimeters positioned on the front (chest) and back of the phantom, and on the two-dosimeter algorithm was investigated for external photon beams. This study was performed using the Korean Typical MAN-2 (KTMAN-2), Chinese Reference Adult Male (CRAM), ICRP male reference, and Male Adult meSH (MASH) reference phantoms. Calculations were performed for beam directions at different polar and azimuthal angles using the Monte Carlo code MCNP at energies of 0.08, 0.3, and 1 MeV. Results show that the body shape significantly affects E and the two-dosimeter responses when the dosimeters are indirectly irradiated. The acquired two-dosimeter algorithms are almost the same for all the mentioned phantoms except for KTMAN-2. Comparisons between the obtained E and the estimated E (Eest), acquired from the two-dosimeter algorithm, show that Eest is overestimated in the overhead (OH) and underfoot (UF) directions. The effect of using one algorithm for all phantoms was also investigated. Results show that application of one algorithm to all reference phantoms is possible. PMID:27389880

  8. Comparison of build-up region doses in oblique tangential 6 MV photon beams calculated by AAA and CCC algorithms in breast Rando phantom

    NASA Astrophysics Data System (ADS)

    Masunun, P.; Tangboonduangjit, P.; Dumrongkijudom, N.

    2016-03-01

    The purpose of this study is to compare, for two algorithms, the build-up region doses on the bolus-covered surface of a breast Rando phantom, the doses inside the breast phantom, and the doses in the lung, a heterogeneous region. The AAA in the Eclipse TPS and the collapsed cone convolution (CCC) algorithm in the Pinnacle treatment planning system were used to plan a tangential field technique with a 6 MV photon beam and a 200 cGy total dose in a breast Rando phantom covered with bolus (5 mm and 10 mm). TLDs were calibrated with Cobalt-60 and used to measure the doses during the irradiation process. The treatment planning results show that the doses in the build-up region and in the breast phantom matched closely for both algorithms, with differences of less than 2%. However, the AAA overestimated the doses in the lung (L2) relative to the CCC algorithm, with differences of 13.78% and 6.06% at 5 mm and 10 mm bolus thickness, respectively. Compared with both treatment plans, the TLD measurements were lower in the build-up region and in the breast phantom but higher in the lung (L2), at both bolus thicknesses.

  9. Difference in dose-volumetric data between the analytical anisotropic algorithm, the dose-to-medium, and the dose-to-water reporting modes of the Acuros XB for lung stereotactic body radiation therapy.

    PubMed

    Mampuya, Wambaka A; Nakamura, Mitsuhiro; Hirose, Yoshinori; Kitsuda, Kenji; Ishigaki, Takashi; Mizowaki, Takashi; Hiraoka, Masahiro

    2016-01-01

    The purpose of this study was to evaluate the difference in dose-volumetric data between the analytical anisotropic algorithm (AAA) and the two dose reporting modes of the Acuros XB, namely, dose to water (AXB_Dw) and dose to medium (AXB_Dm), in lung stereotactic body radiotherapy (SBRT). Thirty-eight plans were generated using the AXB_Dm in the Eclipse Treatment Planning System (TPS) and then recalculated with the AXB_Dw and AAA, using an identical beam setup. A dose of 50 Gy in 4 fractions was prescribed to the isocenter and to the planning target volume (PTV) D95%. The isocenter was always inside the PTV. The following dose-volumetric parameters were evaluated: D2%, D50%, D95%, and D98% for the internal target volume (ITV) and the PTV. Two-tailed paired Student's t-tests determined the statistical significance. Although for most of the parameters evaluated the mean differences observed between the AAA, AXB_Dm, and AXB_Dw were statistically significant (p < 0.05), the absolute differences were rather small, in general less than 5% points. The maximum mean difference was observed in the ITV D50% between the AXB_Dm and the AAA and was 1.7% points under the isocenter prescription and 3.3% points under the D95% prescription. AXB_Dm produced higher values than AXB_Dw, with differences ranging from 0.4 to 1.1% points under the isocenter prescription and 0.0 to 0.7% points under the PTV D95% prescription. The differences observed under the PTV D95% prescription were larger than those observed for the isocenter prescription between AXB_Dm and AAA, AXB_Dm and AXB_Dw, and AXB_Dw and AAA. Although statistically significant, the mean differences between the three algorithms are within 3.3% points. PMID:27685138

  10. Difference in dose-volumetric data between the analytical anisotropic algorithm, the dose-to-medium, and the dose-to-water reporting modes of the Acuros XB for lung stereotactic body radiation therapy.

    PubMed

    Mampuya, Wambaka A; Nakamura, Mitsuhiro; Hirose, Yoshinori; Kitsuda, Kenji; Ishigaki, Takashi; Mizowaki, Takashi; Hiraoka, Masahiro

    2016-01-01

    The purpose of this study was to evaluate the difference in dose-volumetric data between the analytical anisotropic algorithm (AAA) and the two dose reporting modes of the Acuros XB, namely, dose to water (AXB_Dw) and dose to medium (AXB_Dm), in lung stereotactic body radiotherapy (SBRT). Thirty-eight plans were generated using the AXB_Dm in the Eclipse Treatment Planning System (TPS) and then recalculated with the AXB_Dw and AAA, using an identical beam setup. A dose of 50 Gy in 4 fractions was prescribed to the isocenter and to the planning target volume (PTV) D95%. The isocenter was always inside the PTV. The following dose-volumetric parameters were evaluated: D2%, D50%, D95%, and D98% for the internal target volume (ITV) and the PTV. Two-tailed paired Student's t-tests determined the statistical significance. Although for most of the parameters evaluated the mean differences observed between the AAA, AXB_Dm, and AXB_Dw were statistically significant (p < 0.05), the absolute differences were rather small, in general less than 5% points. The maximum mean difference was observed in the ITV D50% between the AXB_Dm and the AAA and was 1.7% points under the isocenter prescription and 3.3% points under the D95% prescription. AXB_Dm produced higher values than AXB_Dw, with differences ranging from 0.4 to 1.1% points under the isocenter prescription and 0.0 to 0.7% points under the PTV D95% prescription. The differences observed under the PTV D95% prescription were larger than those observed for the isocenter prescription between AXB_Dm and AAA, AXB_Dm and AXB_Dw, and AXB_Dw and AAA. Although statistically significant, the mean differences between the three algorithms are within 3.3% points.

  11. The accuracy of the out-of-field dose calculations using a model based algorithm in a commercial treatment planning system

    NASA Astrophysics Data System (ADS)

    Wang, Lilie; Ding, George X.

    2014-07-01

    The out-of-field dose can be clinically important as it relates to the dose to organs at risk, although the accuracy of its calculation in commercial radiotherapy treatment planning systems (TPSs) receives less attention. This study evaluates the uncertainties of the out-of-field dose calculated with a model-based dose calculation algorithm, the anisotropic analytical algorithm (AAA), implemented in a commercial radiotherapy TPS (Varian Eclipse V10), by using Monte Carlo (MC) simulations in which the entire accelerator head, including the multi-leaf collimators, is modeled. The MC-calculated out-of-field doses were validated by experimental measurements. The dose calculations were performed in a water phantom as well as in CT-based patient geometries, and both static and highly modulated intensity-modulated radiation therapy (IMRT) fields were evaluated. We compared the calculated out-of-field doses, defined as lower than 5% of the prescription dose, in four H&N cancer patients and two lung cancer patients treated with volumetric modulated arc therapy (VMAT) and IMRT techniques. The results show that the discrepancy of calculated out-of-field dose profiles between the AAA and MC depends on the depth and is generally less than 1% for comparisons in the water phantom and in CT-based patient dose calculations for static fields and IMRT. For VMAT plans, the difference between AAA and MC is <0.5%. The clinical impact of the error on the calculated organ doses was analyzed by using dose-volume histograms. Although the AAA algorithm significantly underestimated the out-of-field doses, the clinical impact on the calculated organ doses in out-of-field regions may not be significant in practice due to the very low out-of-field doses relative to the target dose.

  12. Pharmacogenetics-based warfarin dosing algorithm decreases time to stable anticoagulation and the risk of major hemorrhage: an updated meta-analysis of randomized controlled trials.

    PubMed

    Wang, Zhi-Quan; Zhang, Rui; Zhang, Peng-Pai; Liu, Xiao-Hong; Sun, Jian; Wang, Jun; Feng, Xiang-Fei; Lu, Qiu-Fen; Li, Yi-Gang

    2015-04-01

    Warfarin remains the most widely used oral anticoagulant for thromboembolic diseases, despite the recent emergence of novel anticoagulants. However, the difficulty of maintaining a stable dose within the therapeutic range and the resulting serious adverse effects have markedly limited its use in clinical practice. Pharmacogenetics-based warfarin dosing is a recently developed strategy for predicting the initial and maintenance doses of warfarin. However, whether this approach is superior to a conventional clinically guided dosing algorithm remains controversial. We compared pharmacogenetics-based and clinically guided dosing algorithms in an updated meta-analysis. We searched OVID MEDLINE, EMBASE, and the Cochrane Library for relevant citations. The primary outcome was the percentage of time in therapeutic range. The secondary outcomes were time to stable therapeutic dose and the risks of adverse events including all-cause mortality, thromboembolic events, total bleedings, and major bleedings. Eleven randomized controlled trials with 2639 participants were included. Our pooled estimates indicated that the pharmacogenetics-based dosing algorithm did not improve the percentage of time in therapeutic range [weighted mean difference, 4.26; 95% confidence interval (CI), -0.50 to 9.01; P = 0.08], but it significantly shortened the time to stable therapeutic dose (weighted mean difference, -8.67; 95% CI, -11.86 to -5.49; P < 0.00001). Additionally, the pharmacogenetics-based algorithm significantly reduced the risk of major bleedings (odds ratio, 0.48; 95% CI, 0.23 to 0.98; P = 0.04), but it did not reduce the risks of all-cause mortality, total bleedings, or thromboembolic events. Our results suggest that pharmacogenetics-based warfarin dosing significantly improves the efficiency of International Normalized Ratio correction and reduces the risk of major hemorrhage.
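
    The pooling behind a weighted mean difference of this kind can be illustrated with a simple fixed-effect inverse-variance sketch (Python/NumPy; the per-trial values are invented for illustration and the paper's random-effects details are not reproduced here):

      import numpy as np

      def pooled_wmd(means, ses):
          # Fixed-effect inverse-variance pooled weighted mean difference, 95% CI
          means, ses = np.asarray(means, float), np.asarray(ses, float)
          w = 1.0 / ses**2                       # weight = inverse variance
          wmd = np.sum(w * means) / np.sum(w)
          se = np.sqrt(1.0 / np.sum(w))
          return wmd, (wmd - 1.96 * se, wmd + 1.96 * se)

      # Hypothetical per-trial differences in time to stable dose (days) and SEs
      wmd, ci = pooled_wmd([-9.1, -7.5, -10.2], [2.0, 1.5, 2.8])
      print(wmd, ci)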

  13. Ultra-low dose CT attenuation correction for PET/CT: analysis of sparse view data acquisition and reconstruction algorithms.

    PubMed

    Rui, Xue; Cheng, Lishui; Long, Yong; Fu, Lin; Alessio, Adam M; Asma, Evren; Kinahan, Paul E; De Man, Bruno

    2015-10-01

    For PET/CT systems, PET image reconstruction requires corresponding CT images for anatomical localization and attenuation correction. In the case of PET respiratory gating, multiple gated CT scans can offer phase-matched attenuation and motion correction, at the expense of increased radiation dose. We aim to minimize the dose of the CT scan, while preserving adequate image quality for the purpose of PET attenuation correction, by introducing sparse view CT data acquisition. We investigated sparse view CT acquisition protocols resulting in ultra-low dose CT scans designed for PET attenuation correction. We analyzed the tradeoffs between the number of views and the integrated tube current per view for a given dose using CT and PET simulations of a 3D NCAT phantom with lesions inserted into the liver and lung. We simulated seven CT acquisition protocols with {984, 328, 123, 41, 24, 12, 8} views per rotation at a gantry speed of 0.35 s. One standard dose and four ultra-low dose levels, namely, 0.35 mAs, 0.175 mAs, 0.0875 mAs, and 0.04375 mAs, were investigated. Both the analytical Feldkamp, Davis and Kress (FDK) algorithm and the Model Based Iterative Reconstruction (MBIR) algorithm were used for CT image reconstruction. We also evaluated the impact of sinogram interpolation to estimate the missing projection measurements due to sparse view data acquisition. For MBIR, we used a penalized weighted least squares (PWLS) cost function with an approximate total-variation (TV) regularizing penalty function. We compared a tube pulsing mode and a continuous exposure mode for sparse view data acquisition. Global PET ensemble root-mean-squares-error (RMSE) and local ensemble lesion activity error were used as quantitative evaluation metrics for PET image quality. With sparse view sampling, it is possible to greatly reduce the CT scan dose when it is primarily used for PET attenuation correction, with little or no measurable effect on the PET image. For the four ultra-low dose levels
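
    As a toy illustration of the PWLS-plus-TV idea (not the MBIR implementation used in the study), the sketch below solves a deliberately simplified problem in which the CT system matrix is replaced by the identity, i.e., statistically weighted denoising with a smoothed total-variation penalty (Python/NumPy; all parameter values are illustrative):

      import numpy as np

      def pwls_tv_denoise(y, weights, beta=0.1, eps=1e-3, n_iter=200, step=0.2):
          # Minimize sum_j w_j (x_j - y_j)^2 + beta * TV_eps(x) by gradient descent.
          # weights is an array of statistical weights with the same shape as y.
          x = y.copy()
          for _ in range(n_iter):
              gx = np.diff(x, axis=0, append=x[-1:, :])   # forward differences, rows
              gy = np.diff(x, axis=1, append=x[:, -1:])   # forward differences, cols
              mag = np.sqrt(gx**2 + gy**2 + eps**2)       # smoothed gradient magnitude
              # divergence of the normalized gradient field approximates the TV gradient
              px, py = gx / mag, gy / mag
              div = (np.diff(px, axis=0, prepend=px[:1, :])
                     + np.diff(py, axis=1, prepend=py[:, :1]))
              grad = 2.0 * weights * (x - y) - beta * div
              x -= step * grad
          return x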

  15. Ultra-low dose CT attenuation correction for PET/CT: analysis of sparse view data acquisition and reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Rui, Xue; Cheng, Lishui; Long, Yong; Fu, Lin; Alessio, Adam M.; Asma, Evren; Kinahan, Paul E.; De Man, Bruno

    2015-09-01

    For PET/CT systems, PET image reconstruction requires corresponding CT images for anatomical localization and attenuation correction. In the case of PET respiratory gating, multiple gated CT scans can offer phase-matched attenuation and motion correction, at the expense of increased radiation dose. We aim to minimize the dose of the CT scan, while preserving adequate image quality for the purpose of PET attenuation correction by introducing sparse view CT data acquisition. We investigated sparse view CT acquisition protocols resulting in ultra-low dose CT scans designed for PET attenuation correction. We analyzed the tradeoffs between the number of views and the integrated tube current per view for a given dose using CT and PET simulations of a 3D NCAT phantom with lesions inserted into the liver and lung. We simulated seven CT acquisition protocols with {984, 328, 123, 41, 24, 12, 8} views per rotation at a gantry speed of 0.35 s. One standard dose and four ultra-low dose levels, namely, 0.35 mAs, 0.175 mAs, 0.0875 mAs, and 0.04375 mAs, were investigated. Both the analytical Feldkamp, Davis and Kress (FDK) algorithm and the Model Based Iterative Reconstruction (MBIR) algorithm were used for CT image reconstruction. We also evaluated the impact of sinogram interpolation to estimate the missing projection measurements due to sparse view data acquisition. For MBIR, we used a penalized weighted least squares (PWLS) cost function with an approximate total-variation (TV) regularizing penalty function. We compared a tube pulsing mode and a continuous exposure mode for sparse view data acquisition. Global PET ensemble root-mean-squares-error (RMSE) and local ensemble lesion activity error were used as quantitative evaluation metrics for PET image quality. With sparse view sampling, it is possible to greatly reduce the CT scan dose when it is primarily used for PET attenuation correction with little or no measurable effect on the PET image. For the four ultra-low dose

  16. Performance evaluation of iterative reconstruction algorithms for achieving CT radiation dose reduction - a phantom study.

    PubMed

    Dodge, Cristina T; Tamm, Eric P; Cody, Dianna D; Liu, Xinming; Jensen, Corey T; Wei, Wei; Kundra, Vikas; Rong, X John

    2016-01-01

    The purpose of this study was to characterize the image quality and dose performance of GE CT iterative reconstruction techniques, adaptive statistical iterative reconstruction (ASiR) and model-based iterative reconstruction (MBIR), over a range of typical to low-dose intervals using the Catphan 600 and the anthropomorphic Kyoto Kagaku abdomen phantoms. The scope of the project was to quantitatively describe the advantages and limitations of these approaches. The Catphan 600 phantom, supplemented with a fat-equivalent oval ring, was scanned using a GE Discovery HD750 scanner at 120 kVp, 0.8 s rotation time, and pitch factors of 0.516, 0.984, and 1.375. The mA was selected for each pitch factor to achieve CTDIvol values of 24, 18, 12, 6, 3, 2, and 1 mGy. Images were reconstructed at 2.5 mm thickness with filtered back-projection (FBP); 20%, 40%, and 70% ASiR; and MBIR. The potential for dose reduction and low-contrast detectability were evaluated from noise and contrast-to-noise ratio (CNR) measurements in the CTP 404 module of the Catphan. Hounsfield units (HUs) of several materials were evaluated from the cylinder inserts in the CTP 404 module, and the modulation transfer function (MTF) was calculated from the air insert. The results were confirmed in the anthropomorphic Kyoto Kagaku abdomen phantom at 6, 3, 2, and 1 mGy. MBIR reduced noise levels five-fold and increased CNR by a factor of five compared to FBP below 6 mGy CTDIvol, resulting in a substantial improvement in image quality. Compared to ASiR and FBP, HUs in images reconstructed with MBIR were consistently lower, and this discrepancy was reversed by higher pitch factors in some materials. MBIR improved the conspicuity of the high-contrast spatial resolution bar pattern, and MTF quantification confirmed the superior spatial resolution performance of MBIR versus FBP and ASiR at higher dose levels. While ASiR and FBP were relatively insensitive to changes in dose and pitch, the spatial resolution for MBIR
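
    For reference, noise and CNR measurements of this kind reduce to simple region-of-interest statistics; a minimal sketch (Python/NumPy; the image slice and the boolean ROI masks are assumed inputs, not data from the study):

      import numpy as np

      def cnr(img, roi_mask, bg_mask):
          # Contrast-to-noise ratio: ROI/background contrast over background noise
          contrast = img[roi_mask].mean() - img[bg_mask].mean()
          noise = img[bg_mask].std()      # background standard deviation as noise
          return abs(contrast) / noise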

  17. SU-E-T-481: Dosimetric Comparison of Acuros XB and Anisotropic Analytic Algorithm with Commercial Monte Carlo Based Dose Calculation Algorithm for Stereotactic Body Radiation Therapy of Lung Cancer

    SciTech Connect

    Cao, M; Tenn, S; Lee, C; Yang, Y; Lamb, J; Agazaryan, N; Lee, P; Low, D

    2014-06-01

    Purpose: To evaluate the performance of three commercially available treatment planning systems for stereotactic body radiation therapy (SBRT) of lung cancer using the following algorithms: a Boltzmann transport equation based algorithm (Acuros XB, AXB), a convolution based algorithm, the Anisotropic Analytic Algorithm (AAA), and a Monte Carlo based algorithm (XVMC). Methods: A total of 10 patients with early stage non-small cell peripheral lung cancer were included. The initial clinical plans were generated using the XVMC based treatment planning system with a prescription of 54 Gy in 3 fractions following the RTOG 0613 protocol. The plans were recalculated with the same beam parameters and monitor units using the AAA and AXB algorithms. A calculation grid size of 2 mm was used for all algorithms. The dose distribution, conformity, and dosimetric parameters for the targets and organs at risk (OAR) were compared between the algorithms. Results: The average PTV volume was 19.6 mL (range 4.2-47.2 mL). The volume of PTV covered by the prescribed dose (PTV-V100) was 93.97±2.00%, 95.07±2.07% and 95.10±2.97% for the XVMC, AXB and AAA algorithms, respectively. There was no significant difference in the high dose conformity index; however, XVMC predicted slightly higher values (p=0.04) for the ratio of the 50% prescription isodose volume to the PTV (R50%). The percentage volume of total lung receiving dose >20 Gy (LungV20Gy) was 4.03±2.26%, 3.86±2.22% and 3.85±2.21% for the XVMC, AXB and AAA algorithms. Examination of dose volume histograms (DVH) revealed small differences in targets and OARs for most patients. However, the AAA algorithm was found to predict considerably higher PTV coverage compared with the AXB and XVMC algorithms in two cases. The dose difference was found to be primarily located at the periphery of the target. Conclusion: For clinical SBRT lung treatment planning, the dosimetric differences between the three commercially available algorithms are generally small except at the target periphery. XVMC

  18. A simplified analytical dose calculation algorithm accounting for tissue heterogeneity for low-energy brachytherapy sources.

    PubMed

    Mashouf, Shahram; Lechtman, Eli; Beaulieu, Luc; Verhaegen, Frank; Keller, Brian M; Ravi, Ananth; Pignol, Jean-Philippe

    2013-09-21

    The American Association of Physicists in Medicine Task Group No. 43 (AAPM TG-43) formalism is the standard for seed brachytherapy dose calculation. But for breast seed implants, Monte Carlo simulations reveal large errors due to tissue heterogeneity. Since TG-43 includes several factors to account for source geometry, anisotropy and strength, we propose an additional correction factor, called the inhomogeneity correction factor (ICF), accounting for tissue heterogeneity in Pd-103 brachytherapy. This correction factor is calculated as a function of the medium's linear attenuation coefficient and mass energy absorption coefficient, and it is independent of the source's internal structure. Ultimately, the dose in heterogeneous media can be calculated as the product of the dose in water, as calculated by the TG-43 protocol, and the ICF. To validate the ICF methodology, the dose absorbed in spherical phantoms with large tissue heterogeneities was compared between the TG-43 formalism corrected for heterogeneity and Monte Carlo simulations. The agreement between Monte Carlo simulations and the ICF method remained within 5% in soft tissues up to several centimeters from a Pd-103 source. Unlike Monte Carlo, the ICF method can easily be integrated into a clinical treatment planning system, and it does not require the detailed internal structure of the source or the photon phase-space.
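
    The structure of the method can be sketched as follows (Python/NumPy). The exact functional form of the ICF is not reproduced from the paper, so the attenuation-ratio expression below is only an illustrative stand-in built from the quantities the abstract names:

      import numpy as np

      def dose_heterogeneous(d_water, r_cm, mu_tissue, mu_water,
                             muen_rho_tissue, muen_rho_water):
          # D_het(r) = D_TG43,water(r) * ICF(r)
          # Illustrative ICF: relative attenuation along the path times the ratio
          # of mass energy absorption coefficients (placeholder for the paper's form)
          icf = np.exp(-(mu_tissue - mu_water) * r_cm) * (muen_rho_tissue / muen_rho_water)
          return d_water * icf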

  19. TU-A-12A-07: CT-Based Biomarkers to Characterize Lung Lesion: Effects of CT Dose, Slice Thickness and Reconstruction Algorithm Based Upon a Phantom Study

    SciTech Connect

    Zhao, B; Tan, Y; Tsai, W; Lu, L; Schwartz, L; So, J; Goldman, J; Lu, Z

    2014-06-15

    Purpose: Radiogenomics promises the ability to study cancer tumor genotype from the phenotype obtained through radiographic imaging. However, little attention has been paid to the sensitivity of image features, the image-based biomarkers, to image acquisition techniques. This study explores the impact of CT dose, slice thickness and reconstruction algorithm on measuring image features using a thorax phantom. Methods: Twenty-four phantom lesions of known volume (1 and 2 mm), shape (spherical, elliptical, lobular and spicular) and density (-630, -10 and +100 HU) were scanned on a GE VCT at four doses (25, 50, 100, and 200 mAs). For each scan, six image series were reconstructed at three slice thicknesses of 5, 2.5 and 1.25 mm with contiguous intervals, using the lung and standard reconstruction algorithms. The lesions were segmented with an in-house 3D algorithm. Fifty (50) image features representing lesion size, shape, edge, and density distribution/texture were computed. A regression method was employed to analyze the effects of CT dose, slice thickness and reconstruction algorithm on these features, adjusting for three confounding factors (size, density and shape of the phantom lesions). Results: The coefficients of CT dose, slice thickness and reconstruction algorithm are presented in Table 1 in the supplementary material. No significant difference was found between the image features calculated on low dose CT scans (25 mAs and 50 mAs). About 50% of the texture features were found to be statistically different between low doses and high doses (100 and 200 mAs). Significant differences were found for almost all features when calculated on 1.25 mm, 2.5 mm, and 5 mm slice thickness images. Reconstruction algorithms significantly affected all density-based image features, but not morphological features. Conclusions: There is a great need to standardize CT imaging protocols for radiogenomics studies because CT dose, slice thickness and reconstruction algorithm impact quantitative image features to

  20. A dose calculation algorithm with correction for proton-nucleus interactions in non-water materials for proton radiotherapy treatment planning

    NASA Astrophysics Data System (ADS)

    Inaniwa, T.; Kanematsu, N.; Sato, S.; Kohno, R.

    2016-01-01

    In treatment planning for proton radiotherapy, the dose measured in water is applied to the patient dose calculation with density scaling by the stopping power ratio ρ_S. Since body tissues are chemically different from water, this approximation may cause dose calculation errors, especially due to differences in nuclear interactions. We proposed and validated an algorithm for correcting these errors. The dose in water is decomposed into three constituents according to the physical interactions of protons in water: the dose from primary protons continuously slowing down by electromagnetic interactions, the dose from protons scattered by elastic and/or inelastic interactions, and the dose resulting from nonelastic interactions. The proportions of the three dose constituents differ between body tissues and water. We determined correction factors for the proportions of the dose constituents with Monte Carlo simulations in various standard body tissues, and formulated them as functions of ρ_S for patient dose calculation. The influence of nuclear interactions on dose was assessed by comparing the Monte Carlo simulated dose and the uncorrected dose in common phantom materials. The influence around the Bragg peak amounted to -6% for polytetrafluoroethylene and 0.3% for polyethylene. The validity of the correction method was confirmed by comparing the simulated and corrected doses in the materials. The deviation was below 0.8% for all materials. The accuracy of the correction factors derived with Monte Carlo simulations was separately verified through irradiation experiments with a 235 MeV proton beam using common phantom materials. The corrected doses agreed with the measurements within 0.4% for all materials except graphite. The influence on tumor dose was assessed in a prostate case. The dose reduction in the tumor was below 0.5%. Our results verify that this algorithm is practical and accurate for proton radiotherapy treatment planning, and
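
    A schematic of the correction step (Python/NumPy; the constituent doses and the polynomial correction factors are placeholders, since the paper's fitted functions of ρ_S are not reproduced here):

      import numpy as np

      def corrected_dose(dose_constituents, rho_s, coeff_polys):
          # Reweight the three water-dose constituents (primary EM, scattered,
          # nonelastic) with tissue-dependent factors modeled as functions of rho_S.
          # coeff_polys: one polynomial coefficient array per constituent (illustrative)
          return sum(np.polyval(c, rho_s) * d
                     for d, c in zip(dose_constituents, coeff_polys))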

  1. SU-E-T-356: Accuracy of Eclipse Electron Macro Monte Carlo Dose Algorithm for Use in Bolus Electron Conformal Therapy

    SciTech Connect

    Carver, R; Popple, R; Benhabib, S; Antolak, J; Sprunger, C; Hogstrom, K

    2014-06-01

    Purpose: To evaluate the accuracy of electron dose distribution calculated by the Varian Eclipse electron Monte Carlo (eMC) algorithm for use with recent commercially available bolus electron conformal therapy (ECT). Methods: eMC-calculated electron dose distributions for bolus ECT have been compared to those previously measured for cylindrical phantoms (retromolar trigone and nose), whose axial cross sections were based on the mid-PTV CT anatomy for each site. The phantoms consisted of SR4 muscle substitute, SR4 bone substitute, and air. The bolus ECT treatment plans were imported into the Eclipse treatment planning system and calculated using the maximum allowable histories (2×10^9), resulting in a statistical error of <0.2%. Smoothing was not used for these calculations. Differences between eMC-calculated and measured dose distributions were evaluated in terms of absolute dose difference as well as distance to agreement (DTA). Results: Results from the eMC for the retromolar trigone phantom showed 89% (41/46) of dose points within 3% dose difference or 3 mm DTA. There was an average dose difference of −0.12% with a standard deviation of 2.56%. Results for the nose phantom showed 95% (54/57) of dose points within 3% dose difference or 3 mm DTA. There was an average dose difference of 1.12% with a standard deviation of 3.03%. Dose calculation times for the retromolar trigone and nose treatment plans were 15 min and 22 min, respectively, using 16 processors (Intel Xeon E5-2690, 2.9 GHz) on a Varian Eclipse framework agent server (FAS). Results of this study were consistent with those previously reported for accuracy of the eMC electron dose algorithm and for the .decimal, Inc. pencil beam redefinition algorithm used to plan the bolus. Conclusion: These results show that the accuracy of the Eclipse eMC algorithm is suitable for clinical implementation of bolus ECT.

  2. TH-E-BRE-11: Adaptive-Beamlet Based Finite Size Pencil Beam (AB-FSPB) Dose Calculation Algorithm for Independent Verification of IMRT and VMAT

    SciTech Connect

    Park, C; Arhjoul, L; Yan, G; Lu, B; Li, J; Liu, C

    2014-06-15

    Purpose: In current IMRT and VMAT settings, the use of a sophisticated dose calculation procedure is inevitable in order to account for the complex treatment fields created by MLCs. As a consequence, independent volumetric dose verification is time consuming, which affects the efficiency of the clinical workflow. In this study, the authors present an efficient pencil beam based dose calculation algorithm that minimizes the computational procedure while preserving accuracy. Methods: The computational time of the Finite Size Pencil Beam (FSPB) algorithm is proportional to the number of infinitesimal, identical beamlets that constitute an arbitrary field shape. In AB-FSPB, the dose distribution from each beamlet is mathematically modeled such that the beamlets used to represent an arbitrary field shape no longer need to be infinitesimal or identical. Consequently, it is possible to represent an arbitrary field shape with a combination of differently sized beamlets, using a minimal number of them. Results: On comparing FSPB with AB-FSPB, the complexity of the algorithm was reduced significantly. For a 25 × 25 cm2 square field, 1 beamlet of 25 × 25 cm2 was sufficient to calculate dose in AB-FSPB, whereas in conventional FSPB, a minimum of 2500 beamlets of 0.5 × 0.5 cm2 size were needed to calculate a dose comparable to the result computed by the Treatment Planning System (TPS). The algorithm was also found to be GPU compatible, maximizing its computational speed. In calculating the 3D dose of an IMRT plan (~30 control points) and a VMAT plan (~90 control points) with a grid size of 2.0 mm (200 × 200 × 200), the dose could be computed within 3-5 and 10-15 seconds, respectively. Conclusion: The authors have developed an efficient pencil beam type dose calculation algorithm called AB-FSPB. Its fast computation, along with GPU compatibility, has shown performance better than conventional FSPB. This fully enables the implementation of AB-FSPB in the clinical environment for independent
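
    One common way to model a finite rectangular beamlet analytically, so that its size can be arbitrary, is a uniform fluence convolved with a Gaussian lateral kernel; the sketch below (Python with SciPy; sigma and the field sizes are illustrative, and this is not the authors' exact beamlet model) shows why a single 25 × 25 cm2 beamlet can cover an open square field:

      import numpy as np
      from scipy.special import erf

      def beamlet_dose(x, y, cx, cy, w, h, sigma=0.3):
          # Lateral dose of a w-by-h cm beamlet centered at (cx, cy):
          # uniform fluence convolved with a Gaussian kernel gives erf edges
          fx = 0.5 * (erf((x - cx + w / 2) / (np.sqrt(2) * sigma))
                      - erf((x - cx - w / 2) / (np.sqrt(2) * sigma)))
          fy = 0.5 * (erf((y - cy + h / 2) / (np.sqrt(2) * sigma))
                      - erf((y - cy - h / 2) / (np.sqrt(2) * sigma)))
          return fx * fy

      # A single variable-size beamlet models the whole 25 x 25 cm2 open field
      xx, yy = np.meshgrid(np.linspace(-20, 20, 201), np.linspace(-20, 20, 201))
      dose = beamlet_dose(xx, yy, 0.0, 0.0, 25.0, 25.0)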

  3. An algorithm to evaluate solar irradiance and effective dose rates using spectral UV irradiance at four selected wavelengths.

    PubMed

    Anav, A; Rafanelli, C; Di Menno, I; Di Menno, M

    2004-01-01

    The paper presents a semi-analytical method for environmental and dosimetric applications to evaluate, in clear sky conditions, the solar irradiance and the effective dose rates for some action spectra using only four spectral irradiance values at selected wavelengths in the UV-B and UV-A regions (305, 320, 340 and 380 nm). The method, named WL4UV, is based on the reconstruction of an approximated spectral irradiance that can be integrated, to obtain the solar irradiance, or convolved with an action spectrum, to obtain an effective dose rate. The parameters required by the algorithm are deduced from archived solar spectral irradiance data. This database contains measurements carried out by Brewer spectrophotometers located in various geographical positions, at similar altitudes, with very different environmental characteristics: Rome (Italy), Ny Alesund (Svalbard Islands, Norway) and Ushuaia (Tierra del Fuego, Argentina). To evaluate the precision of the method, a double test was performed with data not used in developing the model. Archived Brewer measurement data in clear sky conditions from Rome, and data from the National Science Foundation UV data set in San Diego (CA, USA) and Ushuaia, where SUV 100 spectroradiometers operate, were drawn randomly. The comparison of measured and computed irradiance shows a relative deviation of about ±2%. The effective dose rates for the erythema, DNA and non-melanoma skin cancer action spectra have a relative deviation of less than approximately 20% for solar zenith angles <50°. PMID:15266087
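
    A rough sketch of the reconstruct-then-convolve idea (Python/NumPy): spectral irradiance is rebuilt between the four wavelengths, here by simple log-linear interpolation, which is only a stand-in for the WL4UV parameterization, and then either integrated or weighted by the CIE erythemal action spectrum. The four irradiance values are hypothetical:

      import numpy as np

      def cie_erythema(wl):
          # CIE erythemal action spectrum (piecewise form, valid here for 305-380 nm)
          wl = np.asarray(wl, float)
          return np.where(wl <= 328.0,
                          10.0**(0.094 * (298.0 - wl)),
                          10.0**(0.015 * (139.0 - wl)))

      wl4 = np.array([305.0, 320.0, 340.0, 380.0])        # WL4UV wavelengths (nm)
      e4 = np.array([1.0e-2, 8.0e-2, 2.0e-1, 4.0e-1])     # hypothetical W m-2 nm-1

      wl = np.arange(305.0, 380.5, 0.5)
      e = np.exp(np.interp(wl, wl4, np.log(e4)))          # log-linear reconstruction
      irradiance = np.trapz(e, wl)                        # integrated solar irradiance
      ery_dose_rate = np.trapz(e * cie_erythema(wl), wl)  # effective dose rate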

  4. Assessment of dedicated low-dose cardiac micro-CT reconstruction algorithms using the left ventricular volume of small rodents as a performance measure

    SciTech Connect

    Maier, Joscha; Sawall, Stefan; Kachelrieß, Marc

    2014-05-15

    Purpose: Phase-correlated microcomputed tomography (micro-CT) imaging plays an important role in the assessment of mouse models of cardiovascular diseases and the determination of functional parameters such as the left ventricular volume. Because the current gold standard, the phase-correlated Feldkamp reconstruction (PCF), shows poor performance in the case of low dose scans, more sophisticated reconstruction algorithms have been proposed to enable low-dose imaging. In this study, the authors focus on the McKinnon-Bates (MKB) algorithm, the low dose phase-correlated (LDPC) reconstruction, and the high-dimensional total variation minimization reconstruction (HDTV), and investigate their potential to accurately determine the left ventricular volume at different dose levels from 50 to 500 mGy. The results were verified in phantom studies of a five-dimensional (5D) mathematical mouse phantom. Methods: Micro-CT data of eight mice, each administered an x-ray dose of 500 mGy, were acquired, retrospectively gated for cardiac and respiratory motion, and reconstructed using PCF, MKB, LDPC, and HDTV. Dose levels down to 50 mGy were simulated by using only a fraction of the projections. Contrast-to-noise ratio (CNR) was evaluated as a measure of image quality. Left ventricular volume was determined using different segmentation algorithms (Otsu, level sets, region growing). Forward projections of the 5D mouse phantom were performed to simulate a micro-CT scan. The simulated data were processed the same way as the real mouse data sets. Results: Compared to the conventional PCF reconstruction, the MKB, LDPC, and HDTV algorithms yield images of increased quality in terms of CNR. While the MKB reconstruction only provides small improvements, a significant increase of the CNR is observed in LDPC and HDTV reconstructions. The phantom studies demonstrate that left ventricular volumes can be determined accurately at 500 mGy. For lower dose levels which were simulated for real mouse data sets, the

  5. Accuracy of one algorithm used to modify a planned DVH with data from actual dose delivery.

    PubMed

    Ma, Tianjun; Podgorsak, Matthew B; Kumaraswamy, Lalith K

    2016-01-01

    Detection and accurate quantification of treatment delivery errors is important in radiation therapy. This study aims to evaluate the accuracy of DVH-based QA in quantifying delivery errors. Eighteen previously treated VMAT plans (prostate, H&N, and brain) were randomly chosen for this study. Conventional IMRT delivery QA was performed with the ArcCHECK diode detector for error-free plans and plans with the following modifications: 1) induced monitor unit differences up to ±3.0%, 2) control point deletion (3, 5, and 8 control points deleted from each arc), and 3) gantry angle shifts (2° uniform shifts clockwise and counterclockwise). 2D and 3D distance-to-agreement (DTA) analyses were performed for all plans with SNC Patient software and 3DVH software, respectively. Subsequently, the accuracy of the reconstructed DVH curves and DVH parameters in the 3DVH software was analyzed for all selected cases, using the plans in the Eclipse treatment planning system as the standard. 3D DTA analysis of error-induced plans generally gave high pass rates, whereas the 2D evaluation was more sensitive in detecting delivery errors. The average differences in DVH parameters between each pair of Eclipse recalculation and 3DVH prediction were within 2% for all three types of error-induced treatment plans. This illustrates that 3DVH accurately quantifies delivery errors in terms of the actual dose delivered to the patient. 2D DTA analysis should be routinely used for clinical evaluation. Any concerns or dose discrepancies should be further analyzed through DVH-based QA for clinically relevant results and confirmation of conventional passing-rate-based QA. PMID:27685140

  6. High-density dental implants and radiotherapy planning: evaluation of effects on dose distribution using pencil beam convolution algorithm and Monte Carlo method.

    PubMed

    Çatli, Serap

    2015-09-08

    The high atomic number and density of dental implants cause major problems in radiotherapy of head and neck tumors: implant artifacts hamper accurate dose distribution calculations and the contouring of tumors and organs. The limits and deficiencies of the algorithms used in treatment planning systems can lead to large errors in dose calculation, and this may adversely affect the patient's treatment. In the present study, four commercial dental implant materials were used: pure titanium, titanium alloy (Ti-6Al-4V), amalgam, and crown. The effects of the dental implants on dose distribution were determined with two methods, the pencil beam convolution (PBC) algorithm and a Monte Carlo code, for a 6 MV photon beam. The central axis depth doses were calculated on the phantom for a source-skin distance (SSD) of 100 cm and a 10 × 10 cm2 field using both algorithms. The results of the Monte Carlo method and the Eclipse TPS were compared to each other and to those previously reported. In the present study, dose increases in tissue at a distance of 2 mm in front of the dental implants were seen at 6 MV using the Monte Carlo method, due to the backscatter of electrons. The Eclipse treatment planning system (TPS) could not precisely account for the backscatter radiation caused by the dental prostheses: the TPS underestimated the backscatter dose and overestimated the dose beyond the dental implants. The large errors found for the TPS in this study are due to the limits and deficiencies of its algorithms. The accuracy of the PBC algorithm of the Eclipse TPS was evaluated in comparison to Monte Carlo calculations in consideration of the recommendations of the American Association of Physicists in Medicine Radiation Therapy Committee Task Group 65. From the comparisons of the TPS and Monte Carlo calculations, it is verified that Monte Carlo simulation is a good approach to derive the dose distribution in heterogeneous media.

  7. Cardiac computed tomography radiation dose reduction using interior reconstruction algorithm with the aorta and vertebra as known information.

    PubMed

    Bharkhada, Deepak; Yu, Hengyong; Ge, Shuping; Carr, J Jeffrey; Wang, Ge

    2009-01-01

    High x-ray radiation dose is a major public concern with the increasing use of multidetector computed tomography (CT) for the diagnosis of cardiovascular diseases. This issue must be effectively addressed by dose-reduction techniques. Recently, our group proved that an internal region of interest (ROI) can be exactly reconstructed solely from localized projections if a small subregion within the ROI is known. In this article, we propose to use the attenuation values of the blood in the aorta and of vertebral bone to serve as the known information for localized cardiac CT. First, we describe a novel interior tomography approach that backprojects differential fan-beam or parallel-beam projections to obtain the Hilbert transform and then reconstructs the original image in an ROI using the iterative projection onto convex sets algorithm. Then, we develop a numerical phantom based on clinical cardiac CT images for simulations. Our results demonstrate that it is feasible to use practical prior information and exactly reconstruct cardiovascular structures only from projection data along x-ray paths through the ROI.

  8. Influence of model based iterative reconstruction algorithm on image quality of multiplanar reformations in reduced dose chest CT

    PubMed Central

    Dunet, Vincent; Hachulla, Anne-Lise; Grimm, Jochen; Beigelman-Aubry, Catherine

    2016-01-01

    Background: Model-based iterative reconstruction (MBIR) reduces image noise and improves image quality (IQ), but its influence on post-processing tools, including maximal intensity projection (MIP) and minimal intensity projection (mIP), remains unknown. Purpose: To evaluate the influence of MBIR on the IQ of native, mIP, and MIP axial and coronal reformats of reduced dose computed tomography (RD-CT) chest acquisitions. Material and Methods: Raw data of 50 patients, who underwent a standard dose CT (SD-CT) and a follow-up RD-CT with a CT dose index (CTDI) of 2-3 mGy, were reconstructed by MBIR and FBP. Native slices, 4-mm-thick MIP, and 3-mm-thick mIP axial and coronal reformats were generated. The relative IQ, subjective IQ, image noise, and number of artifacts were determined in order to compare the different reconstructions of RD-CT with the reference SD-CT. Results: The lowest noise was observed with MBIR. RD-CT reconstructed by MBIR exhibited the best relative and subjective IQ on coronal views regardless of the post-processing tool. MBIR generated the lowest rate of artifacts on coronal mIP/MIP reformats and the highest on axial reformats, mainly distortion and stair-step artifacts. Conclusion: The MBIR algorithm reduces image noise but generates more artifacts than FBP on axial mIP and MIP reformats of RD-CT. Conversely, it significantly improves IQ on coronal views, without increasing artifacts, regardless of the post-processing technique. PMID:27635253

  9. Verification measurements and clinical evaluation of the iPlan RT Monte Carlo dose algorithm for 6 MV photon energy

    NASA Astrophysics Data System (ADS)

    Petoukhova, A. L.; van Wingerden, K.; Wiggenraad, R. G. J.; van de Vaart, P. J. M.; van Egmond, J.; Franken, E. M.; van Santvoort, J. P. C.

    2010-08-01

    This study presents data for verification of the iPlan RT Monte Carlo (MC) dose algorithm (BrainLAB, Feldkirchen, Germany). MC calculations were compared with pencil beam (PB) calculations and verification measurements in phantoms with lung-equivalent material, air cavities or bone-equivalent material to mimic the head and neck and thorax, and in an Alderson anthropomorphic phantom. The dosimetric accuracy of MC for the micro-multileaf collimator (MLC) simulation was tested in a homogeneous phantom. All measurements were performed using an ionization chamber and Kodak EDR2 films with Novalis 6 MV photon beams. Dose distributions measured with film and calculated with MC in the homogeneous phantom are in excellent agreement for oval, C- and squiggle-shaped fields and for a clinical IMRT plan. For a field with completely closed MLC, MC is much closer to the experimental result than the PB calculations. For fields larger than the dimensions of the inhomogeneities, the MC calculations show excellent agreement (within 3%/1 mm) with the experimental data. MC calculations in the anthropomorphic phantom show good agreement with measurements for conformal beam plans and reasonable agreement for dynamic conformal arc and IMRT plans. For 6 head and neck and 15 lung patients, a comparison of the MC plan with the PB plan was performed. Our results demonstrate that MC is able to accurately predict the dose in the presence of inhomogeneities typical of the head and neck and thorax regions, with reasonable calculation times (5-20 min). Lateral electron transport was well reproduced in the MC calculations. We are planning to implement MC calculations for head and neck and lung cancer patients.

  11. From prompt gamma distribution to dose: a novel approach combining an evolutionary algorithm and filtering based on Gaussian-powerlaw convolutions

    NASA Astrophysics Data System (ADS)

    Schumann, A.; Priegnitz, M.; Schoene, S.; Enghardt, W.; Rohling, H.; Fiedler, F.

    2016-10-01

    Range verification and dose monitoring in proton therapy are considered highly desirable. Different methods have been developed worldwide, such as particle therapy positron emission tomography (PT-PET) and prompt gamma imaging (PGI). In general, these methods allow for a verification of the proton range. However, quantification of the dose from these measurements remains challenging. For the first time, we present an approach for estimating the dose from prompt γ-ray emission profiles. It combines a filtering procedure based on Gaussian-powerlaw convolution with an evolutionary algorithm. By convolving depth dose profiles with an appropriate filter kernel, prompt γ-ray depth profiles are obtained. To reverse this step, the evolutionary algorithm is applied. The feasibility of this approach is demonstrated for a spread-out Bragg peak in a water target.
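
    The two ingredients can be sketched as follows (Python with NumPy/SciPy): a forward filter that convolves a candidate depth-dose curve into a prompt-gamma profile, with a plain Gaussian kernel standing in for the paper's Gaussian-powerlaw kernel, and scipy.optimize.differential_evolution as the evolutionary search that inverts it. The parametric dose model and all numbers are illustrative:

      import numpy as np
      from scipy.optimize import differential_evolution

      z = np.linspace(0.0, 20.0, 81)               # depth (cm)

      def gaussian_kernel(z, sigma=0.8):
          k = np.exp(-0.5 * (z - z.mean())**2 / sigma**2)
          return k / k.sum()                       # normalized, centered kernel

      def depth_dose(params, z):
          # Toy parametric depth dose: entrance plateau plus Gaussian Bragg peak
          plateau, peak, z0, w = params
          return plateau * (z < z0) + peak * np.exp(-0.5 * ((z - z0) / w)**2)

      def forward(params):
          # Filtering step: candidate dose -> predicted prompt-gamma depth profile
          return np.convolve(depth_dose(params, z), gaussian_kernel(z), mode="same")

      measured = forward((0.3, 1.0, 12.0, 0.5))    # synthetic "measured" profile
      measured += 0.01 * np.random.default_rng(0).normal(size=z.size)

      # Evolutionary inversion: dose parameters whose filtered profile fits the data
      res = differential_evolution(lambda p: np.sum((forward(p) - measured)**2),
                                   bounds=[(0, 1), (0, 2), (5, 18), (0.1, 2)], seed=0)
      print(res.x)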

  12. A novel lateral disequilibrium inclusive (LDI) pencil-beam based dose calculation algorithm: Evaluation in inhomogeneous phantoms and comparison with Monte Carlo calculations

    SciTech Connect

    Wertz, Hansjoerg; Jahnke, Lennart; Schneider, Frank; Polednik, Martin; Fleckenstein, Jens; Lohr, Frank; Wenz, Frederik

    2011-03-15

    Purpose: Pencil-beam (PB) based dose calculation for treatment planning is limited by inaccuracies in regions of tissue inhomogeneities, particularly in situations with lateral electron disequilibrium as is present at tissue/lung interfaces. To overcome these limitations, a new "lateral disequilibrium inclusive" (LDI) PB based calculation algorithm was introduced. In this study, the authors evaluated the accuracy of the new model by film and ionization chamber measurements and Monte Carlo simulations. Methods: To validate the performance of the new LDI algorithm implemented in Corvus 09, eight test plans were generated on inhomogeneous thorax and pelvis phantoms. In addition, three plans were calculated with a simple effective path length (EPL) algorithm on the inhomogeneous thorax phantom. To simulate homogeneous tissues, four test plans were evaluated in homogeneous phantoms (homogeneous dose calculation). Results: The mean pixel pass rates and standard deviations of the gamma 4%/4 mm test for the film measurements were (96±3)% for the plans calculated with LDI, (70±5)% for the plans calculated with EPL, and (99±1)% for the homogeneous plans. Ionization chamber measurements and Monte Carlo simulations confirmed the high accuracy of the new algorithm (dose deviations ≤4%; gamma 3%/3 mm ≥96%). Conclusions: LDI represents an accurate and fast dose calculation algorithm for treatment planning.

  13. Whole-body CT-based imaging algorithm for multiple trauma patients: radiation dose and time to diagnosis

    PubMed Central

    Gordic, S; Hodel, S; Simmen, H-P; Brueesch, M; Frauenfelder, T; Wanner, G; Sprengel, K

    2015-01-01

    Objective: To determine the number of imaging examinations, radiation dose and the time to complete trauma-related imaging in multiple trauma patients before and after introduction of whole-body CT (WBCT) into early trauma care. Methods: 120 consecutive patients before and 120 patients after introduction of WBCT into the trauma algorithm of the University Hospital Zurich were compared regarding the number and type of CT, radiography, focused assessment with sonography for trauma (FAST), additional CT examinations (defined as CT of the same body regions after radiography and/or FAST) and the time to complete trauma-related imaging. Results: In the WBCT cohort, significantly more patients underwent CT of the head, neck, chest and abdomen (p < 0.001) than in the non-WBCT cohort, whereas the number of radiographic examinations of the cervical spine, chest and pelvis and of FAST examinations were significantly lower (p < 0.001). There were no significant differences between cohorts regarding the number of radiographic examinations of the upper (p = 0.56) and lower extremities (p = 0.30). We found significantly higher effective doses in the WBCT (29.5 mSv) than in the non-WBCT cohort (15.9 mSv; p < 0.001), but fewer additional CT examinations for completing the work-up were needed in the WBCT cohort (p < 0.001). The time to complete trauma-related imaging was significantly shorter in the WBCT (12 min) than in the non-WBCT cohort (75 min; p < 0.001). Conclusion: Including WBCT in the initial work-up of trauma patients results in higher radiation doses, but fewer additional CT examinations are needed, and the time for completing trauma-related imaging is shorter. Advances in knowledge: WBCT in trauma patients is associated with a high radiation dose of 29.5 mSv. PMID:25594105

  14. SU-E-T-626: Accuracy of Dose Calculation Algorithms in MultiPlan Treatment Planning System in Presence of Heterogeneities

    SciTech Connect

    Moignier, C; Huet, C; Barraux, V; Loiseau, C; Sebe-Mercier, K; Batalla, A; Makovicka, L

    2014-06-15

    Purpose: Advanced stereotactic radiotherapy (SRT) treatments require accurate dose calculation for treatment planning, especially for treatment sites involving heterogeneous patient anatomy. The purpose of this study was to evaluate the accuracy of the dose calculation algorithms implemented in the MultiPlan treatment planning system (TPS), Ray Tracing and Monte Carlo (MC), in the presence of heterogeneities. Methods: First, the LINAC of a CyberKnife radiotherapy facility was modeled with the PENELOPE MC code. A protocol for the measurement of dose distributions with EBT3 films was established and validated through comparisons between experimental dose distributions and dose distributions calculated with the MultiPlan Ray Tracing and MC algorithms, as well as with the PENELOPE MC model, for treatments planned with the homogeneous Easycube phantom. Finally, bone and lung inserts were used to set up a heterogeneous Easycube phantom. Treatment plans with 10, 7.5 or 5 mm field sizes were generated in the MultiPlan TPS for different tumor localizations (in the lung and at the lung/bone/soft tissue interface). Experimental dose distributions were compared to the PENELOPE MC and MultiPlan calculations using the gamma index method. Results: Regarding the experiment in the homogeneous phantom, 100% of the points passed the 3%/3 mm tolerance criteria. These criteria include the global error of the method (CT-scan resolution, EBT3 dosimetry, LINAC positioning, etc.), and were used afterwards to estimate the accuracy of the MultiPlan algorithms in heterogeneous media. Comparison of the dose distributions obtained in the heterogeneous phantom is in progress. Conclusion: This work has led to the development of numerical and experimental dosimetric tools for small beam dosimetry. The Ray Tracing and MC algorithms implemented in the MultiPlan TPS were evaluated in heterogeneous media.
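
    For reference, the gamma index used here combines a dose-difference criterion with a distance-to-agreement criterion; a minimal 1D version with global normalization and brute-force search (Python/NumPy; the profile arrays are assumed inputs, not data from the study):

      import numpy as np

      def gamma_index_1d(x_ref, d_ref, x_eval, d_eval, dta_mm=3.0, dd_pct=3.0):
          # For each reference point, minimize the combined dose-difference /
          # distance-to-agreement metric over the evaluated profile (Low's gamma)
          dd = dd_pct / 100.0 * d_ref.max()        # global dose criterion
          gamma = np.empty_like(d_ref)
          for i, (xr, dr) in enumerate(zip(x_ref, d_ref)):
              g2 = ((x_eval - xr) / dta_mm)**2 + ((d_eval - dr) / dd)**2
              gamma[i] = np.sqrt(g2.min())
          return gamma

      # Pass rate for 3%/3 mm: fraction of points with gamma <= 1, e.g.
      # pass_rate = (gamma_index_1d(x, film_dose, x, calc_dose) <= 1).mean()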

  15. SU-E-T-520: Four-Dimensional Dose Calculation Algorithm Considering Variations in Dose Distribution Induced by Sinusoidal One-Dimensional Motion Patterns

    SciTech Connect

    Taguenang, J; Algan, O; Ahmad, S; Ali, I

    2014-06-01

    Purpose: To quantitatively investigate motion-induced variations in dose distributions through measurements and modeling, and to develop a four-dimensional (4D) motion model of dose distributions that accounts for different motion parameters. Methods: Variations in dose distributions induced by sinusoidal phantom motion were measured using a multiple-diode-array detector (MapCheck2). MapCheck2 was mounted on a mobile platform that moves with adjustable calibrated motion patterns in the superior-inferior direction. Various plans, including open and intensity-modulated fields, were used to irradiate MapCheck2. A motion model was developed to predict spatial and temporal variations in the dose distributions and their dependence on the motion parameters using a pencil-beam spread-out superposition function. This model used the superposition of pencil beams weighted with a probability function extracted from the motion trajectory. The model was verified against measured dose distributions obtained from MapCheck2. Results: Dose distributions varied considerably with motion: in the region between the isocenter and the 50% isodose line, dose decreased with increasing motion amplitude, while dose levels beyond the 50% isodose line increased with increasing motion amplitude. When the range of motion (ROM = twice the amplitude) was smaller than the field length, both the central axis dose and the 50% isodose line did not change with variation of the motion amplitude and remained equal to those of the stationary phantom. As the ROM became larger than the field length, the dose level decreased at the central axis and at the 50% isodose line. Motion frequency and phase did not affect dose distributions delivered over an extended time longer than a few motion cycles; however, they played an important role for doses delivered at high dose rates within one motion cycle. Conclusion: A 4D dose motion model was developed to predict and correct variations in dose distributions induced by one
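
    For delivery times spanning many motion cycles, the superposition described is equivalent to convolving the static dose profile with the probability density of the target position, which for sinusoidal motion is the arcsine density peaked at the motion extremes. A sketch (Python/NumPy; the field size and amplitude are illustrative, and the grid must resolve the amplitude):

      import numpy as np

      def sinusoid_pdf(x, amplitude):
          # Arcsine law: position distribution of sinusoidal motion
          p = np.zeros_like(x)
          inside = np.abs(x) < amplitude
          p[inside] = 1.0 / (np.pi * np.sqrt(amplitude**2 - x[inside]**2))
          return p / np.trapz(p, x)                 # renormalize on the grid

      x = np.linspace(-10, 10, 2001)                # cm, superior-inferior axis
      static = (np.abs(x) <= 3.0).astype(float)     # idealized 6 cm field profile
      pdf = sinusoid_pdf(x, amplitude=1.5)          # ROM = 3 cm

      dx = x[1] - x[0]
      blurred = np.convolve(static, pdf, mode="same") * dx   # motion-averaged dose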

  16. Evaluation of the influence of tumor location and size on the difference of dose calculation between Ray Tracing algorithm and Fast Monte Carlo algorithm in stereotactic body radiotherapy of non-small cell lung cancer using CyberKnife.

    PubMed

    Wu, Vincent W C; Tam, Kwok-wah; Tong, Shun-ming

    2013-09-06

    This study evaluated the extent of improvement in dose prediction accuracy achieved by the Fast Monte Carlo algorithm (MC) compared to the Ray Tracing algorithm (RAT) in stereotactic body radiotherapy (SBRT) of non-small cell lung cancer (NSCLC), and how their differences were influenced by tumor site and size. Thirty-three NSCLC patients treated with SBRT by CyberKnife in 2011 were recruited. They were divided into a central target group (n = 17) and a peripheral target group (n = 16) according to the RTOG 0236 guidelines. Each group was further divided into large and small target subgroups. After the computation of treatment plans using RAT, an MC plan was generated using the same patient data and treatment parameters. Apart from the target reference point dose measurements, various dose parameters for the planning target volume (PTV) and organs at risk (OARs) were assessed. In addition, the "Fractional Deviation" (FDev), defined as the ratio of the RAT to MC values, was calculated for comparison. For peripheral lung cases, RAT produced significantly higher dose values at all the reference points than MC. The FDev of all reference point doses and dose parameters was greater in the small target than in the large target subgroup. For central lung cases, there were no significant reference point or OAR dose differences between RAT and MC. When comparing the small target and large target subgroups, the FDev values of the dose parameters and reference point doses did not show significant differences. Despite its shorter computation time, RAT was inferior to MC, usually overestimating the target dose. RAT would not be recommended for SBRT of peripheral lung tumors regardless of the target size. However, it could be considered for large central lung tumors, because there its performance was comparable to MC.

  17. Genotype-guided versus standard vitamin K antagonist dosing algorithms in patients initiating anticoagulation. A systematic review and meta-analysis.

    PubMed

    Belley-Cote, Emilie P; Hanif, Hasib; D'Aragon, Frederick; Eikelboom, John W; Anderson, Jeffrey L; Borgman, Mark; Jonas, Daniel E; Kimmel, Stephen E; Manolopoulos, Vangelis G; Baranova, Ekaterina; Maitland-van der Zee, Anke H; Pirmohamed, Munir; Whitlock, Richard P

    2015-10-01

    Variability in vitamin K antagonist (VKA) dosing is partially explained by genetic polymorphisms. We performed a meta-analysis to determine whether genotype-guided VKA dosing algorithms decrease a composite of death, thromboembolic events and major bleeding (primary outcome) and improve time in therapeutic range (TTR). We searched MEDLINE, EMBASE, CENTRAL, trial registries and conference proceedings for randomised trials comparing genotype-guided and standard (non-genotype-guided) VKA dosing algorithms in adults initiating anticoagulation. Data were pooled using a random effects model. Of the 12 included studies (3,217 patients), six reported all components of the primary outcome of mortality, thromboembolic events and major bleeding (2,223 patients, 87 events). Our meta-analysis found no significant difference between groups for the primary outcome (relative risk 0.85, 95% confidence interval [CI] 0.54-1.34; heterogeneity χ² = 4.46, p = 0.35, I² = 10%). Based on 10 studies (2,767 patients), TTR was significantly higher in the genotype-guided group (mean difference (MD) 4.31%; 95% CI 0.35, 8.26; heterogeneity χ² = 43.31, p < 0.001, I² = 79%). Pre-specified exploratory analyses demonstrated that TTR was significantly higher when genotype-guided dosing was compared with fixed VKA dosing (6 trials, 997 patients: MD 8.41%; 95% CI 3.50, 13.31; heterogeneity χ² = 15.18, p = 0.01, I² = 67%) but not when compared with clinical algorithm-guided dosing (4 trials, 1,770 patients: MD -0.29%; 95% CI -2.48, 1.90; heterogeneity χ² = 1.53, p = 0.68, I² = 0%; p for interaction = 0.002). In conclusion, genotype-guided compared with standard VKA dosing algorithms were not found to decrease a composite of death, thromboembolism and major bleeding, but did result in improved TTR. An improvement in TTR was observed in comparison with fixed VKA dosing algorithms, but not with clinical algorithms.

  18. Incorporating an Exercise Detection, Grading, and Hormone Dosing Algorithm Into the Artificial Pancreas Using Accelerometry and Heart Rate

    PubMed Central

    Jacobs, Peter G.; Resalat, Navid; El Youssef, Joseph; Reddy, Ravi; Branigan, Deborah; Preiser, Nicholas; Condon, John; Castle, Jessica

    2015-01-01

    In this article, we present several important contributions necessary for enabling an artificial endocrine pancreas (AP) system to better respond to exercise events. First, we show how exercise can be automatically detected using body-worn accelerometer and heart rate sensors. During a 22-hour overnight inpatient study, 13 subjects with type 1 diabetes wearing a Zephyr accelerometer and heart rate monitor underwent 45 minutes of mild aerobic treadmill exercise while controlling their glucose levels using sensor-augmented pump therapy. We used the accelerometer and heart rate as inputs into a validated regression model. Using this model, we were able to detect the exercise event with a sensitivity of 97.2% and a specificity of 99.5%. Second, from this same study, we show how patients' glucose declined during the exercise event, and we present results from in silico modeling that demonstrate how including an exercise model in the glucoregulatory model improves the estimation of the drop in glucose during exercise. Last, we present an exercise dosing adjustment algorithm and describe parameter tuning and performance using an in silico glucoregulatory model during an exercise event. PMID:26438720
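
    As an illustration of the detection step only (the study's validated regression model is not reproduced), a logistic-regression toy on accelerometer and heart-rate features, reporting sensitivity and specificity (Python with scikit-learn; all feature values are invented):

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      # Hypothetical windowed features: [accelerometer activity, heart rate (bpm)]
      X = np.array([[0.05, 68], [0.07, 72], [0.9, 118],
                    [1.1, 126], [0.06, 70], [1.0, 122]])
      y = np.array([0, 0, 1, 1, 0, 1])             # 1 = exercise window

      clf = LogisticRegression().fit(X, y)
      pred = clf.predict(X)
      tp = np.sum((pred == 1) & (y == 1)); fn = np.sum((pred == 0) & (y == 1))
      tn = np.sum((pred == 0) & (y == 0)); fp = np.sum((pred == 1) & (y == 0))
      print("sensitivity", tp / (tp + fn), "specificity", tn / (tn + fp))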

  19. SU-E-I-05: A Correction Algorithm for Kilovoltage Cone-Beam Computed Tomography Dose Calculations in Cervical Cancer Patients

    SciTech Connect

    Zhang, J; Zhang, W; Lu, J

    2015-06-15

    Purpose: To investigate the accuracy and feasibility of dose calculations using kilovoltage cone beam computed tomography in cervical cancer radiotherapy with a correction algorithm. Methods: The Hounsfield unit (HU) to electron density (HU-density) curve was obtained for both planning CT (pCT) and kilovoltage cone beam CT (CBCT) using a CIRS-062 calibration phantom. The pCT and kV-CBCT images have different HU values, and directly using the CBCT HU-density curve to calculate dose on CBCT images may introduce deviations in the dose distribution; it is therefore necessary to normalize the HU values between pCT and CBCT. An HU correction algorithm was applied to the CBCT images (cCBCT). Fifteen intensity-modulated radiation therapy (IMRT) plans of cervical cancer were chosen, and the plans were transferred to the pCT and cCBCT data sets without any changes for dose calculations. Phantom and patient studies were carried out. The dose differences and dose distributions were compared between the cCBCT plans and the pCT plans. Results: The HU numbers of CBCT were measured several times, and the maximum change was less than 2%. Compared to pCT, the dose differences for the CBCT and cCBCT images were 2.48%±0.65% (range: 1.3%-3.8%) and 0.48%±0.21% (range: 0.1%-0.82%) in the phantom study, respectively. For dose calculations on patient images, the dose differences were 2.25%±0.43% (range: 1.4%-3.4%) and 0.63%±0.35% (range: 0.13%-0.97%), respectively. For the dose distributions, the passing rate of cCBCT was higher than that of CBCT. Conclusion: CBCT-based dose calculation is feasible in cervical cancer radiotherapy, and the correction algorithm offers acceptable accuracy. It will become a useful tool for adaptive radiation therapy.
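
    The normalization step can be sketched as a two-way lookup through the HU-density calibrations (Python/NumPy): CBCT HU is mapped to electron density through the CBCT curve, then mapped back through the planning-CT curve. The calibration points below are invented placeholders for CIRS-062 measurements:

      import numpy as np

      # Hypothetical calibration points (HU, relative electron density)
      hu_pct = np.array([-1000, -800, -100, 0, 300, 1200])
      hu_cbct = np.array([-950, -750, -80, 20, 350, 1300])
      dens = np.array([0.0, 0.2, 0.9, 1.0, 1.28, 1.7])    # shared density axis

      def correct_cbct_hu(hu):
          # CBCT HU -> density via the CBCT curve, then invert the pCT curve,
          # so the corrected image (cCBCT) matches the planning-CT calibration
          d = np.interp(hu, hu_cbct, dens)
          return np.interp(d, dens, hu_pct)   # requires monotonic curves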

  20. Experimental study on the application of a compressed-sensing (CS) algorithm to dental cone-beam CT (CBCT) for accurate, low-dose image reconstruction

    NASA Astrophysics Data System (ADS)

    Oh, Jieun; Cho, Hyosung; Je, Uikyu; Lee, Minsik; Kim, Hyojeong; Hong, Daeki; Park, Yeonok; Lee, Seonhwa; Cho, Heemoon; Choi, Sungil; Koo, Yangseo

    2013-03-01

    In practical applications of three-dimensional (3D) tomographic imaging, there are often challenges for image reconstruction from insufficient data. In computed tomography (CT), for example, image reconstruction from few views would enable fast scanning with reduced doses to the patient. In this study, we investigated and implemented an efficient reconstruction method based on a compressed-sensing (CS) algorithm, which exploits the sparseness of the gradient image while maintaining substantially high accuracy, for accurate, low-dose dental cone-beam CT (CBCT) reconstruction. We applied the algorithm to a commercially-available dental CBCT system (Expert7™, Vatech Co., Korea) and performed experimental work to demonstrate the algorithm for image reconstruction in insufficient sampling problems. We successfully reconstructed CBCT images from several undersampled data sets and evaluated the reconstruction quality in terms of the universal-quality index (UQI). Experimental demonstrations of the CS-based reconstruction algorithm indicate that it can be applied to current dental CBCT systems for reducing imaging doses and improving the image quality.
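
    A rough sketch of a CS-style few-view reconstruction in the same spirit, alternating SART data-consistency updates with total-variation denoising to exploit gradient sparsity; this uses generic scikit-image building blocks, not the authors' implementation:

        # Few-view CS-style reconstruction sketch: SART + TV denoising loop.
        import numpy as np
        from skimage.data import shepp_logan_phantom
        from skimage.transform import radon, iradon_sart, resize
        from skimage.restoration import denoise_tv_chambolle

        image = resize(shepp_logan_phantom(), (128, 128))
        theta = np.linspace(0.0, 180.0, 40, endpoint=False)   # few-view: 40 projections
        sinogram = radon(image, theta=theta)

        recon = np.zeros_like(image)
        for _ in range(10):
            recon = iradon_sart(sinogram, theta=theta, image=recon)  # enforce data fit
            recon = denoise_tv_chambolle(recon, weight=0.02)         # promote gradient sparsity

        print(float(np.abs(recon - image).mean()))   # crude reconstruction-error check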

  1. Dosimetric verification and clinical evaluation of a new commercially available Monte Carlo-based dose algorithm for application in stereotactic body radiation therapy (SBRT) treatment planning

    NASA Astrophysics Data System (ADS)

    Fragoso, Margarida; Wen, Ning; Kumar, Sanath; Liu, Dezhi; Ryu, Samuel; Movsas, Benjamin; Munther, Ajlouni; Chetty, Indrin J.

    2010-08-01

    Modern cancer treatment techniques, such as intensity-modulated radiation therapy (IMRT) and stereotactic body radiation therapy (SBRT), have greatly increased the demand for more accurate treatment planning (structure definition, dose calculation, etc.) and dose delivery. The ability to use fast and accurate Monte Carlo (MC)-based dose calculations within a commercial treatment planning system (TPS) in the clinical setting is now becoming more of a reality. This study describes the dosimetric verification and initial clinical evaluation of a new commercial MC-based photon beam dose calculation algorithm, within the iPlan v.4.1 TPS (BrainLAB AG, Feldkirchen, Germany). Experimental verification of the MC photon beam model was performed with film and ionization chambers in water phantoms and in heterogeneous solid-water slabs containing bone and lung-equivalent materials for a 6 MV photon beam from a Novalis (BrainLAB) linear accelerator (linac) with a micro-multileaf collimator (m3 MLC). The agreement between calculated and measured dose distributions in the water phantom verification tests was, on average, within 2%/1 mm (high dose/high gradient) and was within ±4%/2 mm in the heterogeneous slab geometries. Example treatment plans in the lung show significant differences between the MC and one-dimensional pencil beam (PB) algorithms within iPlan, especially for small lesions in the lung, where electronic disequilibrium effects are emphasized. Other user-specific features in the iPlan system, such as options to select dose to water or dose to medium, and the mean variance level, have been investigated. Timing results for typical lung treatment plans show the total computation time (including that for processing and I/O) to be less than 10 min for 1-2% mean variance (running on a single PC with 8 Intel Xeon X5355 CPUs, 2.66 GHz). Overall, the iPlan MC algorithm is demonstrated to be an accurate and efficient dose algorithm, incorporating robust tools for MC

  2. SU-E-J-109: Evaluation of Deformable Accumulated Parotid Doses Using Different Registration Algorithms in Adaptive Head and Neck Radiotherapy

    SciTech Connect

    Xu, S; Liu, B

    2015-06-15

    Purpose: Three deformable image registration (DIR) algorithms were utilized to perform deformable dose accumulation for head and neck tomotherapy treatment, and the differences in the accumulated doses were evaluated. Methods: Daily MVCT data for 10 patients with pathologically proven nasopharyngeal cancers were analyzed. The data were acquired using tomotherapy (TomoTherapy, Accuray) at the PLA General Hospital. The prescription dose to the primary target was 70Gy in 33 fractions. Three DIR methods (B-spline, Diffeomorphic Demons and MIMvista) were used to propagate parotid structures from planning CTs to the daily CTs and accumulate fractionated dose on the planning CTs. The mean accumulated doses of the parotids were quantitatively compared and the uncertainties of the propagated parotid contours were evaluated using the Dice similarity index (DSI). Results: The planned mean dose of the ipsilateral parotids (32.42±3.13Gy) was slightly higher than that of the contralateral parotids (31.38±3.19Gy) in the 10 patients. The differences between the accumulated mean doses of the ipsilateral parotids in the B-spline, Demons and MIMvista deformation algorithms (36.40±5.78Gy, 34.08±6.72Gy and 33.72±2.63Gy) were statistically significant (B-spline vs Demons, p<0.0001; B-spline vs MIMvista, p=0.002). The differences between those of the contralateral parotids in the B-spline, Demons and MIMvista deformation algorithms (34.08±4.82Gy, 32.42±4.80Gy and 33.92±4.65Gy) were also significant (B-spline vs Demons, p=0.009; B-spline vs MIMvista, p=0.074). For the DSI analysis, the scores of the B-spline, Demons and MIMvista DIRs were 0.90, 0.89 and 0.76. Conclusion: Shrinkage of parotid volumes results in a dose increase to the parotid glands in adaptive head and neck radiotherapy. The accumulated doses of the parotids show significant differences between the different DIR algorithms applied between kVCT and MVCT. Therefore, the volume-based criterion (i.e. DSI) as a quantitative evaluation of
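
    The DSI used above to score the propagated parotid contours is straightforward to compute on binary structure masks; a minimal sketch with toy masks:

        # Dice similarity index (DSI) on binary contour masks (toy example).
        import numpy as np

        def dice(mask_a, mask_b):
            a = np.asarray(mask_a, dtype=bool)
            b = np.asarray(mask_b, dtype=bool)
            overlap = np.logical_and(a, b).sum()
            return 2.0 * overlap / (a.sum() + b.sum())

        # Toy masks standing in for a planning-CT contour and a propagated contour.
        ref = np.zeros((50, 50), dtype=bool); ref[10:30, 10:30] = True
        prop = np.zeros((50, 50), dtype=bool); prop[12:32, 11:31] = True
        print(dice(ref, prop))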

  3. Extrapolation algorithm to Forecast the Dynamics of Accumulation of the Absorbed Dose at the International Space Station, according to the Radiation Monitoring System Data

    NASA Astrophysics Data System (ADS)

    Lishnevskii, Andrey

    The ISS service module is equipped with the radiation monitoring system (RMS), which provides data for the daily estimation of the radiation environment on board the station. The sensitive elements of the RMS are silicon semiconductor detectors and ionization chambers. The data obtained in a quiet radiation environment made it possible to determine the contributions to the absorbed radiation dose from galactic cosmic rays and the Earth’s inner radiation belt. The corresponding analysis was conducted for the 2005-2011 period. As a result, empirical relations were obtained that allow calculation of the dose for one crossing of the area of the South Atlantic Anomaly. The input parameters for the calculation are the longitude and altitude at which the ISS trajectory crosses this area. The obtained empirical relations made it possible to develop a simple calculation algorithm for short-term forecasting of the dynamics of accumulation of the radiation dose at the ISS, which is based on the assumption that the current level of contribution to the daily dose from galactic cosmic rays and the structure of the Earth’s inner radiation belt at the station flight altitude remain unchanged within a few days. The results of the analysis of the ISS RMS data, conducted using the developed calculation algorithm for the period from 2005 to 2011 (the period in which solar cycle 23 ended and solar cycle 24 began), showed the possibility of implementing a short-term (1-2 day) forecast of the dynamics of dose accumulation on board the station with an acceptable error (of no more than 30 percent). In addition, the forecast algorithm was verified for the growth phase of solar cycle 24 (2011-2014). The developed forecasting algorithm may be used to process and analyse the current RMS information when providing effective radiation safety for the ISS crew.
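
    A minimal sketch of the forecasting idea, assuming (as above) that the GCR contribution and the radiation-belt structure stay fixed over the forecast horizon; all dose values below are illustrative placeholders, not RMS data:

        # Short-term dose-accumulation forecast sketch: daily dose = GCR
        # contribution + sum of per-crossing SAA doses from empirical relations.
        def forecast_daily_dose(gcr_rate_ugy_per_day, saa_crossing_doses_ugy):
            """Assumes GCR level and inner-belt structure stay fixed for 1-2 days."""
            return gcr_rate_ugy_per_day + sum(saa_crossing_doses_ugy)

        gcr = 80.0                                   # uGy/day from quiet-time data (made up)
        saa = [12.0, 15.0, 9.0, 14.0, 11.0, 13.0]    # one entry per predicted SAA crossing
        accumulated = [forecast_daily_dose(gcr, saa) * day for day in (1, 2)]
        print(accumulated)   # 1-day and 2-day accumulated-dose forecasts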

  4. Inverse determination of the penalty parameter in penalized weighted least-squares algorithm for noise reduction of low-dose CBCT

    SciTech Connect

    Wang, Jing; Guan, Huaiqun; Solberg, Timothy

    2011-07-15

    Purpose: A statistical projection restoration algorithm based on the penalized weighted least-squares (PWLS) criterion can substantially improve the image quality of low-dose CBCT images. The performance of PWLS is largely dependent on the choice of the penalty parameter. Previously, the penalty parameter was chosen empirically by trial and error. In this work, the authors developed an inverse technique to calculate the penalty parameter in PWLS for noise suppression of low-dose CBCT in image guided radiotherapy (IGRT). Methods: In IGRT, a daily CBCT is acquired for the same patient during a treatment course. In this work, the authors acquired the CBCT with a high-mAs protocol for the first session and then a lower-mAs protocol for the subsequent sessions. The high-mAs projections served as the goal (ideal) toward which the low-mAs projections were to be smoothed by minimizing the PWLS objective function. The penalty parameter was determined through an inverse calculation of the derivative of the objective function incorporating both the high- and low-mAs projections. The parameter obtained can then be used in PWLS to smooth the noise in the low-dose projections. CBCT projections for a CatPhan 600 and an anthropomorphic head phantom, as well as for a brain patient, were used to evaluate the performance of the proposed technique. Results: The penalty parameter in PWLS was obtained for each CBCT projection using the proposed strategy. The noise in the low-dose CBCT images reconstructed from the smoothed projections was greatly suppressed. Image quality in PWLS-processed low-dose CBCT was comparable to that of the corresponding high-dose CBCT. Conclusions: A technique was proposed to estimate the penalty parameter for the PWLS algorithm. It provides an objective and efficient way to obtain the penalty parameter for image restoration algorithms that require predefined smoothing parameters.
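
    A simplified one-dimensional illustration of reference-guided penalty selection: with a quadratic roughness penalty the PWLS minimizer has a closed form, and a grid search over the penalty parameter stands in for the authors' derivative-based inverse calculation:

        # 1-D PWLS sketch: pick beta so the smoothed low-mAs profile best
        # matches the high-mAs reference (quadratic penalty, uniform weights).
        import numpy as np

        n = 200
        x = np.linspace(0, 1, n)
        truth = np.exp(-((x - 0.5) ** 2) / 0.02)         # ideal projection profile
        rng = np.random.default_rng(1)
        high = truth + rng.normal(0, 0.01, n)            # high-mAs (low noise) reference
        low = truth + rng.normal(0, 0.1, n)              # low-mAs (high noise) projection

        D = np.diff(np.eye(n), axis=0)                   # first-difference (roughness) operator
        W = np.eye(n)                                    # uniform statistical weights for brevity

        def pwls_solution(y, beta):
            return np.linalg.solve(W + beta * D.T @ D, W @ y)

        betas = np.logspace(-2, 3, 30)
        errors = [np.linalg.norm(pwls_solution(low, b) - high) for b in betas]
        print("selected beta:", betas[int(np.argmin(errors))])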

  5. PIVET rFSH dosing algorithms for individualized controlled ovarian stimulation enables optimized pregnancy productivity rates and avoidance of ovarian hyperstimulation syndrome.

    PubMed

    Yovich, John L; Alsbjerg, Birgit; Conceicao, Jason L; Hinchliffe, Peter M; Keane, Kevin N

    2016-01-01

    The first PIVET algorithm for individualized recombinant follicle stimulating hormone (rFSH) dosing in in vitro fertilization, reported in 2012, was based on age and antral follicle count grading with adjustments for anti-Müllerian hormone level, body mass index, day-2 FSH, and smoking history. In 2007, it was enabled by the introduction of a metered rFSH pen allowing small dosage increments of ~8.3 IU per click. In 2011, a second rFSH pen was introduced allowing more precise dosages of 12.5 IU per click, and both pens with their individual algorithms have been applied continuously at our clinic. The objective of this observational study was to validate the PIVET algorithms pertaining to the two rFSH pens with the aim of collecting ≤15 oocytes and minimizing the risk of ovarian hyperstimulation syndrome. The data set included 2,822 in vitro fertilization stimulations over a 6-year period until April 2014 applying either of the two individualized dosing algorithms and corresponding pens. The main outcome measures were the mean number of oocytes retrieved and the resultant embryos designated for transfer or cryopreservation, which permitted calculation of oocyte and embryo utilization rates. Ensuing pregnancies were tracked until live births, and live birth productivity rates embracing fresh and frozen transfers were calculated. Overall, the results showed that the mean oocyte number was 10.0 for all women <40 years, with 24% requiring rFSH dosages <150 IU. Applying both specific algorithms in our clinic meant that the starting dose was not altered for 79.1% of patients and for 30.1% of those receiving the very lowest rFSH dosages (≤75 IU). Only 0.3% of patients were diagnosed with severe ovarian hyperstimulation syndrome, all deemed avoidable due to definable breaches from the protocols. The live birth productivity rate exceeded 50% for women <35 years and was 33.2% for the group aged 35-39 years. Routine use of both algorithms led to only 11.6% of women generating >15 oocytes

  7. SU-C-207-05: A Comparative Study of Noise-Reduction Algorithms for Low-Dose Cone-Beam Computed Tomography

    SciTech Connect

    Mukherjee, S; Yao, W

    2015-06-15

    Purpose: To study different noise-reduction algorithms and to improve the image quality of low-dose cone beam CT for patient positioning in radiation therapy. Methods: In low-dose cone-beam CT, the reconstructed image is contaminated with excessive quantum noise. In this study, three well-developed noise-reduction algorithms, namely (a) the penalized weighted least square (PWLS) method, (b) the split-Bregman total variation (TV) method, and (c) the compressed sensing (CS) method, were studied and applied to the images of a computer-simulated “Shepp-Logan” phantom and a physical CATPHAN phantom. Up to 20% additive Gaussian noise was added to the Shepp-Logan phantom. The CATPHAN phantom was scanned by a Varian OBI system with 100 kVp, 4 ms and 20 mA. To compare the performance of these algorithms, the peak signal-to-noise ratio (PSNR) of the denoised images was computed. Results: The algorithms were shown to have the potential to reduce the noise level in low-dose CBCT images. For the Shepp-Logan phantom, an improvement in PSNR of 2 dB, 3.1 dB and 4 dB was observed using PWLS, TV and CS respectively, while for the CATPHAN, the improvement was 1.2 dB, 1.8 dB and 2.1 dB, respectively. Conclusion: The penalized weighted least square, total variation and compressed sensing methods were studied and compared for reducing the noise in images of a simulated phantom and a physical phantom scanned with low-dose CBCT. The techniques have shown promising results for noise reduction in terms of PSNR improvement. However, reducing the noise without compromising the smoothness and resolution of the image needs more extensive research.
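
    The PSNR figure of merit used above can be computed as in this short sketch (reference and noisy images are synthetic):

        # PSNR in dB relative to a noise-free reference image.
        import numpy as np

        def psnr(reference, test):
            mse = np.mean((np.asarray(reference, float) - np.asarray(test, float)) ** 2)
            peak = np.max(np.abs(reference))
            return 10.0 * np.log10(peak ** 2 / mse)

        ref = np.zeros((64, 64)); ref[16:48, 16:48] = 1.0
        noisy = ref + np.random.default_rng(2).normal(0, 0.2, ref.shape)
        print(psnr(ref, noisy))   # a denoised image should score higher than this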

  8. 3D-weighted cone beam filtered backprojection (CB-FBP) algorithm for image reconstruction at low-helical pitches to improve noise characteristics and dose efficiency

    NASA Astrophysics Data System (ADS)

    Tang, Xiangyang; Hsieh, Jiang; Nilsen, Roy A.

    2006-03-01

    A three-dimensional weighted cone beam filtered backprojection (CB-FBP) algorithm (namely the 3D weighted CB-FBP algorithm) has been proposed to reconstruct images from projection data acquired along a helical trajectory in angular ranges up to [0, 2π]. However, an over scan is usually employed in the clinic to provide premium image quality for an accurate diagnosis at the most challenging anatomic structures, such as the head, spine and extremities. In an over scan, the corresponding normalized helical pitch is usually smaller than 1:1, under which the projection data acquired along an angular range larger than [0, 2π] can be utilized to reconstruct an image. To improve noise characteristics or dose efficiency in an over scan, we extended the 3D weighted CB-FBP algorithm to handle helical pitches that are smaller than 1:1, while the algorithm's other advantages, such as reconstruction accuracy and computational efficiency, are maintained. The novelty of the extended 3D weighted CB-FBP algorithm is the decomposition of an over scan with an angular range corresponding to [0, 2π + Δβ] (0 < Δβ < 2π) into a union of full scans with an angular range corresponding to [0, 2π]. As a result, the extended 3D weighting function is a weighted sum of all 3D weighting functions corresponding to each overlapped full scan. An experimental evaluation shows that the extended 3D weighted CB-FBP algorithm can significantly improve the noise characteristics or dose efficiency of the 3D weighted CB-FBP algorithm at helical pitches smaller than 1:1, while its reconstruction accuracy and computational efficiency are maintained. It is important to indicate that the extended 3D weighting function is still applied to projection data before 3D backprojection, making the computational efficiency of the extended 3D weighted CB-FBP algorithm comparable to that of the 3D weighted CB-FBP algorithm. It is believed that such an efficient CB reconstruction algorithm that can provide premium

  9. Bladder dose accumulation based on a biomechanical deformable image registration algorithm in volumetric modulated arc therapy for prostate cancer.

    PubMed

    Andersen, E S; Muren, L P; Sørensen, T S; Noe, K O; Thor, M; Petersen, J B; Høyer, M; Bentzen, L; Tanderup, K

    2012-11-01

    Variations in bladder position, shape and volume cause uncertainties in the doses delivered to this organ during a course of radiotherapy for pelvic tumors. The purpose of this study was to evaluate the potential of dose accumulation based on repeat imaging and deformable image registration (DIR) to improve the accuracy of bladder dose assessment. For each of nine prostate cancer patients, the initial treatment plan was re-calculated on eight to nine repeat computed tomography (CT) scans. The planned bladder dose-volume histogram (DVH) parameters were compared to corresponding parameters derived from DIR-based accumulations as well as DVH summation based on dose re-calculations. It was found that the deviations between the DIR-based accumulations and the planned treatment were substantial and ranged (-0.5 to 2.3) Gy and (-9.4 to 13.5) Gy for D(2%) and D(mean), respectively, whereas the deviations between DIR-based accumulations and DVH summation were small and well within 1 Gy. For the investigated treatment scenario, DIR-based bladder dose accumulation did not result in substantial improvement of dose estimation as compared to the straightforward DVH summation. Large variations were found in individual patients between the doses from the initial treatment plan and the accumulated bladder doses. Hence, the use of repeat imaging has a potential for improved accuracy in treatment dose reporting.
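
    A minimal sketch of extracting the compared DVH parameters, D(mean) and the near-maximum dose D(2%), from a per-voxel organ dose array (toy data):

        # DVH parameter extraction: Dmean and D2% (98th percentile of voxel doses).
        import numpy as np

        def dvh_parameters(organ_doses_gy):
            doses = np.asarray(organ_doses_gy, float)
            d_mean = doses.mean()
            d2 = np.percentile(doses, 98.0)   # dose received by the hottest 2% of volume
            return d_mean, d2

        doses = np.random.default_rng(3).normal(40.0, 8.0, 10000).clip(0)  # toy bladder doses
        print("Dmean = %.1f Gy, D2%% = %.1f Gy" % dvh_parameters(doses))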

  11. Validation of calculation algorithms for organ doses in CT by measurements on a 5 year old paediatric phantom

    NASA Astrophysics Data System (ADS)

    Dabin, Jérémie; Mencarelli, Alessandra; McMillan, Dayton; Romanyukha, Anna; Struelens, Lara; Lee, Choonsik

    2016-06-01

    Many organ dose calculation tools for computed tomography (CT) scans rely on two assumptions: (1) organ doses estimated for one CT scanner can be converted into organ doses for another CT scanner using the ratio of the Computed Tomography Dose Index (CTDI) between the two scanners; and (2) helical scans can be approximated as the summation of axial slices covering the same scan range. The current study aims to experimentally validate these two assumptions. We performed organ dose measurements in a 5-year-old physical anthropomorphic phantom for five different CT scanners from four manufacturers. Absorbed doses to 22 organs were measured using thermoluminescent dosimeters for head-to-torso scans. We then compared the measured organ doses with the values calculated from the National Cancer Institute dosimetry system for CT (NCICT) computer program, developed at the National Cancer Institute. Whereas the measured organ doses showed significant variability (coefficient of variation (CoV) up to 53% at 80 kV) across different scanner models, the CoV of organ doses normalised to CTDIvol substantially decreased (12% CoV on average at 80 kV). For most organs, the difference between measured and simulated organ doses was within ±20% except for the bone marrow, breasts and ovaries. The discrepancies were further explained by additional Monte Carlo calculations of organ doses using a voxel phantom developed from CT images of the physical phantom. The results demonstrate that organ doses calculated for one CT scanner can be used to assess organ doses from other CT scanners with 20% uncertainty (k = 1), for the scan settings considered in the study.
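
    Assumption (1) amounts to a simple rescaling; a sketch with illustrative numbers:

        # Organ-dose conversion between scanners via the CTDIvol ratio
        # (values are illustrative, not measured data).
        def convert_organ_dose(organ_dose_ref_mgy, ctdivol_ref_mgy, ctdivol_new_mgy):
            return organ_dose_ref_mgy * (ctdivol_new_mgy / ctdivol_ref_mgy)

        liver_ref = 8.4   # mGy on the reference scanner (e.g., an NCICT-style output)
        print(convert_organ_dose(liver_ref, ctdivol_ref_mgy=10.0, ctdivol_new_mgy=7.5))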

  13. Experimental validation of deterministic Acuros XB algorithm for IMRT and VMAT dose calculations with the Radiological Physics Center’s head and neck phantom

    PubMed Central

    Han, Tao; Mourtada, Firas; Kisling, Kelly; Mikell, Justin; Followill, David; Howell, Rebecca

    2012-01-01

    Purpose: The purpose of this study was to verify the dosimetric performance of Acuros XB (AXB), a grid-based Boltzmann solver, in intensity-modulated radiation therapy (IMRT) and volumetric-modulated arc therapy (VMAT). Methods: The Radiological Physics Center (RPC) head and neck (H&N) phantom was used for all calculations and measurements in this study. Clinically equivalent IMRT and VMAT plans were created on the RPC H&N phantom in the Eclipse treatment planning system (version 10.0) by using RPC dose prescription specifications. The dose distributions were calculated with two different algorithms, AXB 11.0.03 and anisotropic analytical algorithm (AAA) 10.0.24. Two dose report modes of AXB were recorded: dose-to-medium in medium (Dm,m) and dose-to-water in medium (Dw,m). Each treatment plan was delivered to the RPC phantom three times for reproducibility by using a Varian Clinac iX linear accelerator. Absolute point dose and planar dose were measured with thermoluminescent dosimeters (TLDs) and GafChromic® EBT2 film, respectively. Profile comparison and 2D gamma analysis were used to quantify the agreement between the film measurements and the calculated dose distributions from both AXB and AAA. The computation times for AAA and AXB were also evaluated. Results: Good agreement was observed between measured doses and those calculated with AAA or AXB. Both AAA and AXB calculated doses within 5% of TLD measurements in both the IMRT and VMAT plans. Results of AXB_Dm,m (0.1% to 3.6%) were slightly better than AAA (0.2% to 4.6%) or AXB_Dw,m (0.3% to 5.1%). The gamma analysis for both AAA and AXB met the RPC 7%/4 mm criteria (over 90% passed), whereas AXB_Dm,m met 5%/3 mm criteria in most cases. AAA was 2 to 3 times faster than AXB for IMRT, whereas AXB was 4–6 times faster than AAA for VMAT. Conclusions: AXB was found to be satisfactorily accurate when compared to measurements in the RPC H&N phantom. Compared with AAA, AXB results were equal to or better than those

  14. Electron dose distributions caused by the contact-type metallic eye shield: Studies using Monte Carlo and pencil beam algorithms

    SciTech Connect

    Kang, Sei-Kwon; Yoon, Jai-Woong; Hwang, Taejin; Park, Soah; Cheong, Kwang-Ho; Jin Han, Tae; Kim, Haeyoung; Lee, Me-Yeon; Ju Kim, Kyoung; Bae, Hoonsik

    2015-10-01

    A metallic contact eye shield has sometimes been used for eyelid treatment, but the dose distribution has never been reported for a patient case. This study aimed to show the shield-incorporated, CT-based dose distribution using the Pinnacle system and Monte Carlo (MC) calculation for 3 patient cases. For an artifact-free CT scan, an acrylic shield machined to the same size as the tungsten shield was used. For the MC calculation, BEAMnrc and DOSXYZnrc were used for the 6-MeV electron beam of the Varian 21EX, in which material information for the tungsten, stainless steel, and aluminum components of the eye shield was used. The same plan was generated on the Pinnacle system and the two were compared. The use of the acrylic shield produced clear CT images, enabling delineation of the regions of interest, and yielded a CT-based dose calculation for the metallic shield. Both the MC and the Pinnacle systems showed a similar dose distribution downstream of the eye shield, reflecting the blocking effect of the metallic eye shield. The major difference between the MC and the Pinnacle results was the target eyelid dose upstream of the shield, such that the Pinnacle system underestimated the dose by 19 to 28% and 11 to 18% for the maximum and the mean doses, respectively. The pattern of dose difference between the MC and the Pinnacle systems was similar to that in a previous phantom study. In conclusion, the metallic eye shield was successfully incorporated into the CT-based planning, and accurate dose calculation requires MC simulation.

  15. SU-E-I-82: Improving CT Image Quality for Radiation Therapy Using Iterative Reconstruction Algorithms and Slightly Increasing Imaging Doses

    SciTech Connect

    Noid, G; Chen, G; Tai, A; Li, X

    2014-06-01

    Purpose: Iterative reconstruction (IR) algorithms are developed to improve CT image quality (IQ) by reducing noise without diminishing spatial resolution or contrast. For CT in radiation therapy (RT), slightly increasing the imaging dose to improve IQ may be justified if it can substantially enhance structure delineation. The purpose of this study is to investigate and quantify the IQ enhancement resulting from increased imaging doses and the use of IR algorithms. Methods: CT images were acquired for phantoms, built to evaluate IQ metrics including spatial resolution, contrast and noise, with a variety of imaging protocols using a CT scanner (Definition AS Open, Siemens) installed inside a Linac room. Representative patients were scanned once the protocols were optimized. Both phantom and patient scans were reconstructed using the Sinogram Affirmed Iterative Reconstruction (SAFIRE) and the Filtered Back Projection (FBP) methods. IQ metrics of the obtained CTs were compared. Results: The IR techniques are demonstrated to preserve spatial resolution as measured by the point spread function and to reduce noise in comparison with traditional FBP. Driven by the reduction in noise, the contrast-to-noise ratio is doubled by adopting the highest SAFIRE strength. As expected, increasing the imaging dose reduces noise for both SAFIRE and FBP reconstructions. The contrast-to-noise ratio increases from 3 to 5 when the dose is increased by a factor of 4. Similar IQ improvement was observed on the CTs of selected patients with pancreas and prostate cancers. Conclusion: The IR techniques produce a measurable enhancement to CT IQ by reducing the noise. Increasing the imaging dose further reduces noise independent of the IR techniques. The improved CT enables more accurate delineation of tumors and/or organs at risk during RT planning and delivery guidance.

  16. SU-E-I-06: A Dose Calculation Algorithm for KV Diagnostic Imaging Beams by Empirical Modeling

    SciTech Connect

    Chacko, M; Aldoohan, S; Sonnad, J; Ahmad, S; Ali, I

    2015-06-15

    Purpose: To develop an accurate three-dimensional (3D) empirical dose calculation model for kV diagnostic beams for different radiographic and CT imaging techniques. Methods: Dose was modeled using photon attenuation measured using depth dose (DD), scatter radiation of the source and medium, and off-axis ratio (OAR) profiles. Measurements were performed using a single diode in water and a diode-array detector (MapCHECK2) with kV on-board imagers (OBI) integrated with Varian TrueBeam and Trilogy linacs. The dose parameters were measured for three energies: 80, 100, and 125 kVp, with and without bowtie filters, using field sizes 1×1–40×40 cm2 and depths 0–20 cm in a water tank. Results: The measured DD decreased with depth in water because of photon attenuation, while it increased with field size due to increased scatter radiation from the medium. DD curves varied with energy and filters, increasing with higher energies and with beam hardening from the half-fan and full-fan bowtie filters. Scatter radiation factors increased with field size and higher energies. The OAR was within 3% for beam profiles within the flat dose regions. The heel effect of this kV OBI system was within 6% of the central axis value at different depths. The presence of bowtie filters attenuated the measured dose off-axis by as much as 80% at the edges of large beams. The model dose predictions were verified against measured doses using a single-point diode and ionization chamber or two-dimensional diode-array detectors inserted in solid water phantoms. Conclusion: This empirical model enables fast and accurate 3D dose calculation in water within 5% in regions with near charged-particle equilibrium conditions outside the buildup region and penumbra. It accurately considers the scatter radiation contribution in water, which is superior to the air-kerma or CTDI dose measurements usually used in dose calculation for diagnostic imaging beams. Considering heterogeneity corrections in this model will enable patient specific dose
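
    A hedged sketch of the factorization implied by the model description: dose at depth and off-axis position expressed as a reference output times a depth-dose term (attenuation plus field-size-dependent scatter) times an off-axis ratio. All coefficients below are invented for illustration, not the authors' fitted values:

        # Empirical kV dose model sketch: D = D_ref * DD(depth, field) * OAR(x).
        import numpy as np

        def kv_dose(depth_cm, side_cm, offaxis_cm, d_ref=1.0):
            mu = 0.25                                       # cm^-1, effective attenuation (assumed)
            scatter = 1.0 + 0.015 * side_cm                 # scatter grows with field size
            dd = np.exp(-mu * depth_cm) * scatter           # depth-dose term
            oar = np.exp(-((offaxis_cm / (0.6 * side_cm)) ** 8))  # flat top, soft edge
            return d_ref * dd * oar

        print(kv_dose(depth_cm=5.0, side_cm=10.0, offaxis_cm=0.0))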

  17. Dosimetric accuracy and clinical quality of Acuros XB and AAA dose calculation algorithm for stereotactic and conventional lung volumetric modulated arc therapy plans

    PubMed Central

    2013-01-01

    Introduction: The main aim of the current study was to assess the dosimetric accuracy and clinical quality of volumetric modulated arc therapy (VMAT) plans for stereotactic (stage I) and conventional (stage III) lung cancer treatments planned with the Eclipse version 10.0 Anisotropic Analytical Algorithm (AAA) and Acuros XB (AXB) algorithm. Methods: The dosimetric impact of using AAA instead of AXB, and grid size 2.5 mm instead of 1.0 mm, for VMAT treatment plans was evaluated. The clinical plan quality of AXB VMAT was assessed using 45 stage I and 73 stage III patients, and was compared with published results planned with VMAT and hybrid-VMAT techniques. Results: The dosimetric impact on the near-minimum PTV dose (D98%) of using AAA instead of AXB was large (underdose up to 12.3%) for stage I and very small (underdose up to 0.8%) for stage III lung treatments. There were no significant differences in dose volume histogram (DVH) values between grid sizes. The calculation time was significantly higher for AXB grid size 1.0 mm than 2.5 mm (p < 0.01). The clinical quality of the VMAT plans was at least comparable with the clinical qualities reported in the literature for lung treatment plans with VMAT and hybrid-VMAT techniques. The average mean lung dose (MLD), lung V20Gy and V5Gy in this study were respectively 3.6 Gy, 4.1% and 15.7% for the 45 stage I patients and 12.4 Gy, 19.3% and 46.6% for the 73 stage III lung patients. The average contralateral lung dose V5Gy-cont was 35.6% for stage III patients. Conclusions: For stereotactic and conventional lung treatments, VMAT calculated with AXB at a grid size of 2.5 mm resulted in accurate dose calculations. No hybrid technique was needed to meet the dose constraints. AXB is recommended instead of AAA to avoid serious overestimation of the minimum target doses compared to the actually delivered dose. PMID:23800024

  18. Development of an algorithm for feed-forward chlorine dosing of lettuce wash operations and correlation of chlorine profile with Escherichia coli O157:H7 inactivation.

    PubMed

    Zhou, Bin; Luo, Yaguang; Nou, Xiangwu; Millner, Patricia

    2014-04-01

    The dynamic interactions of chlorine and organic matter during a simulated fresh-cut produce wash process and the consequences for Escherichia coli O157:H7 inactivation were investigated. An algorithm for a chlorine feed-forward dosing scheme to maintain a stable chlorine level was further developed and validated. Organic loads with a chemical oxygen demand of 300 to 800 mg/liter were modeled using iceberg lettuce. Sodium hypochlorite (NaOCl) was added to the simulated wash solution incrementally. The solution pH, free and total chlorine, and oxidation-reduction potential were monitored, and the chlorination breakpoint and chloramine humps were determined. The results indicated that the E. coli O157:H7 inactivation curve mirrored that of the free chlorine during the chlorine replenishment process: a slight reduction in E. coli O157:H7 was observed as the combined chlorine hump was approached, while the E. coli O157:H7 cell populations declined sharply after chlorination passed the chlorine hump and decreased to below the detection limit (<0.75 most probable number per ml) after the chlorination breakpoint was reached. While the amounts of NaOCl required for reaching the chloramine humps and chlorination breakpoints depended on the organic loads, there was a linear correlation between NaOCl input and free chlorine in the wash solution once NaOCl dosing passed the chlorination breakpoint, regardless of organic load. The data obtained were further exploited to develop a NaOCl dosing algorithm for maintaining a stable chlorine concentration in the presence of an increasing organic load. The validation test results indicated that free chlorine could be maintained at target levels using such an algorithm, while the pH and oxidation-reduction potential were also stably maintained.
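
    A minimal sketch of the feed-forward idea: past the chlorination breakpoint, free chlorine rises roughly linearly with NaOCl input, so the dose needed to hold a target level can be computed from the current residual and the incoming organic load. The slope and demand coefficient below are illustrative placeholders, not the fitted values from the study:

        # Feed-forward NaOCl dosing sketch (coefficients are assumed, not fitted).
        def naocl_dose_mg_per_liter(target_free_cl, current_free_cl, incoming_cod):
            slope = 0.9             # mg/L free Cl gained per mg/L NaOCl past breakpoint
            demand_per_cod = 0.012  # mg/L NaOCl consumed per mg/L of added COD
            deficit = max(target_free_cl - current_free_cl, 0.0)
            return deficit / slope + demand_per_cod * incoming_cod

        # Hold 10 mg/L free chlorine while lettuce adds 50 mg/L COD to the wash water.
        print(naocl_dose_mg_per_liter(10.0, 8.2, 50.0))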

  19. CTC-ask: a new algorithm for conversion of CT numbers to tissue parameters for Monte Carlo dose calculations applying DICOM RS knowledge

    NASA Astrophysics Data System (ADS)

    Ottosson, Rickard O.; Behrens, Claus F.

    2011-11-01

    One of the building blocks in Monte Carlo (MC) treatment planning is to convert patient CT data to MC compatible phantoms, consisting of density and media matrices. The resulting dose distribution is highly influenced by the accuracy of the conversion. Two major contributing factors are precise conversion of CT number to density and proper differentiation between air and lung. Existing tools do not address this issue specifically. Moreover, their density conversion may depend on the number of media used. Differentiation between air and lung is an important task in MC treatment planning and misassignment may lead to local dose errors on the order of 10%. A novel algorithm, CTC-ask, is presented in this study. It enables locally confined constraints for the media assignment and is independent of the number of media used for the conversion of CT number to density. MC compatible phantoms were generated for two clinical cases using a CT-conversion scheme implemented in both CTC-ask and the DICOM-RT toolbox. Full MC dose calculation was subsequently conducted and the resulting dose distributions were compared. The DICOM-RT toolbox inaccurately assigned lung in 9.9% and 12.2% of the voxels located outside of the lungs for the two cases studied, respectively. This was completely avoided by CTC-ask. CTC-ask is able to reduce anatomically irrational media assignment. The CTC-ask source code can be made available upon request to the authors.
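
    A sketch of the two conversion tasks CTC-ask addresses, under simplifying assumptions: CT number to mass density via a piecewise-linear ramp (independent of the number of media used), and media assignment with a locally confined constraint so that voxels outside a lung contour are never labeled lung. The ramp points and HU thresholds are illustrative, not the tool's calibration:

        # CT number -> density and constrained media assignment (illustrative values).
        import numpy as np

        hu_pts = np.array([-1000.0, -700.0, 0.0, 1500.0])
        rho_pts = np.array([0.0012, 0.30, 1.00, 1.85])   # g/cm^3

        def hu_to_density(hu):
            return np.interp(hu, hu_pts, rho_pts)

        def assign_media(hu, inside_lung_contour):
            """Air vs lung differentiation with a locally confined constraint."""
            if hu < -900:
                return "AIR"
            if hu < -300:
                return "LUNG" if inside_lung_contour else "AIR"  # no lung outside the lungs
            return "TISSUE" if hu < 200 else "BONE"

        print(hu_to_density(-750.0), assign_media(-600.0, inside_lung_contour=False))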

  20. Development and Evaluation of a New Air Exchange Rate Algorithm for the Stochastic Human Exposure and Dose Simulation Model

    EPA Science Inventory

    between-home and between-city variability in residential pollutant infiltration. This is likely a result of differences in home ventilation, or air exchange rates (AER). The Stochastic Human Exposure and Dose Simulation (SHEDS) model is a population exposure model that uses a pro...

  1. Algorithm for x-ray beam hardening and scatter correction in low-dose cone-beam CT: phantom studies

    NASA Astrophysics Data System (ADS)

    Liu, Wenlei; Rong, Junyan; Gao, Peng; Liao, Qimei; Lu, HongBing

    2016-03-01

    X-ray scatter, along with beam hardening, poses a significant limitation to image quality in cone-beam CT (CBCT), resulting in image artifacts, contrast reduction, and a lack of CT number accuracy; meanwhile, the x-ray radiation dose is also non-negligible. Numerous scatter and beam-hardening correction methods have been developed independently, but they have rarely been combined with low-dose CT reconstruction. In this paper, we combine scatter suppression with beam hardening correction for sparse-view CT reconstruction to improve CT image quality and reduce CT radiation. Firstly, scatter was measured, estimated, and removed using measurement-based methods, assuming that the signal in the lead-blocker shadow is attributable only to x-ray scatter. Secondly, beam hardening was modeled by estimating an equivalent attenuation coefficient at the effective energy, which was integrated into the forward projector of the algebraic reconstruction technique (ART). Finally, compressed sensing (CS) iterative reconstruction is carried out for sparse-view CT reconstruction to reduce the CT radiation. Preliminary Monte Carlo simulated experiments indicate that, with only about 25% of the conventional dose, our method reduces the magnitude of the cupping artifact by a factor of 6.1, increases the contrast by a factor of 1.4 and the CNR by a factor of 15. The proposed method can provide good reconstructed images from a few view projections, with effective suppression of artifacts caused by scatter and beam hardening, as well as a reduced radiation dose. This proposed framework and modeling may provide a new way toward low-dose CT imaging.

  2. Assessment of the dose reduction potential of a model-based iterative reconstruction algorithm using a task-based performance metrology

    SciTech Connect

    Samei, Ehsan; Richard, Samuel

    2015-01-15

    indicated a 46%–84% dose reduction potential, depending on task, without compromising the modeled detection performance. Conclusions: The presented methodology, based on ACR phantom measurements, extends current possibilities for the assessment of CT image quality under the complex resolution and noise characteristics exhibited by statistical and iterative reconstruction algorithms. The findings further suggest that MBIR can potentially make better use of the projection data to reduce CT dose by approximately a factor of 2. Alternatively, if the dose is held unchanged, it can improve image quality by different amounts for different tasks.

  3. Low kilovoltage peak (kVp) with an adaptive statistical iterative reconstruction algorithm in computed tomography urography: evaluation of image quality and radiation dose

    PubMed Central

    Zhou, Zhiguo; Chen, Haixi; Wei, Wei; Zhou, Shanghui; Xu, Jingbo; Wang, Xifu; Wang, Qingguo; Zhang, Guixiang; Zhang, Zhuoli; Zheng, Linfeng

    2016-01-01

    Purpose: The purpose of this study was to evaluate the image quality and radiation dose in computed tomography urography (CTU) images acquired with a low kilovoltage peak (kVp) in combination with an adaptive statistical iterative reconstruction (ASiR) algorithm. Methods: A total of 45 subjects (18 women, 27 men) who underwent CTU with kV assist software for automatic selection of the optimal kVp were included and divided into two groups (A and B) based on the kVp and image reconstruction algorithm: group A consisted of patients who underwent CTU at 80 or 100 kVp and whose images were reconstructed with the 50% ASiR algorithm (n=32); group B consisted of patients who underwent CTU at 120 kVp and whose images were reconstructed with the filtered back projection (FBP) algorithm (n=13). The images were separately reconstructed with volume rendering (VR) and maximum intensity projection (MIP). Finally, the image quality was evaluated using an image score, CT attenuation, image noise, the contrast-to-noise ratio (CNR) of the renal pelvis-to-abdominal visceral fat and the signal-to-noise ratio (SNR) of the renal pelvis. The radiation dose was assessed using the volume CT dose index (CTDIvol), dose-length product (DLP) and effective dose (ED). Results: For groups A and B, the subjective image scores for the VR reconstruction images were 3.9±0.4 and 3.8±0.4, respectively, while those for the MIP reconstruction images were 3.8±0.4 and 3.6±0.6, respectively. No significant difference was found (p>0.05) between the two groups’ image scores for either the VR or MIP reconstruction images. Additionally, the inter-reviewer image scores did not differ significantly (p>0.05). The mean attenuation of the bilateral renal pelvis in group A was significantly higher than that in group B (271.4±57.6 vs. 221.8±35.3 HU, p<0.05), whereas the image noise in group A was significantly lower than that in group B (7.9±2.1 vs. 10.5±2.3 HU, p<0.05). The CNR and SNR in group A were

  5. Commissioning and Validation of the First Monte Carlo Based Dose Calculation Algorithm Commercial Treatment Planning System in Mexico

    NASA Astrophysics Data System (ADS)

    Lárraga-Gutiérrez, J. M.; García-Garduño, O. A.; de la Cruz, O. O. Galván; Hernández-Bojórquez, M.; Ballesteros-Zebadúa, P.

    2010-12-01

    This work presents the beam data commissioning and dose calculation validation of the first Monte Carlo (MC) based treatment planning system (TPS) installed in Mexico. According to the manufacturer specifications, the beam data commissioning needed for this model includes: several in-air and water profiles, depth dose curves, head-scatter factors and output factors (6×6, 12×12, 18×18, 24×24, 42×42, 60×60, 80×80 and 100×100 mm2). Radiographic and radiochromic films, diodes and ionization chambers were used for data acquisition. MC dose calculations in a water phantom were used to validate the MC simulations using comparisons with measured data. Gamma index criteria of 2%/2 mm were used to evaluate the accuracy of the MC calculations. The MC calculated data show excellent agreement for field sizes from 18×18 to 100×100 mm2. Gamma analysis shows that, on average, 95% and 100% of the data pass the gamma index criteria for these fields, respectively. For the smaller fields (12×12 and 6×6 mm2) only 92% of the data meet the criteria. Total scatter factors show good agreement (<2.6%) between MC calculated and measured data, except for the smaller fields (12×12 and 6×6 mm2), which show an error of 4.7%. MC dose calculations are accurate and precise for clinical treatment planning down to a field size of 18×18 mm2. Special care must be taken for smaller fields.

  6. SU-E-J-85: Leave-One-Out Perturbation (LOOP) Fitting Algorithm for Absolute Dose Film Calibration

    SciTech Connect

    Chu, A; Ahmad, M; Chen, Z; Nath, R

    2014-06-01

    Purpose: To introduce an outlier-recognition fitting routine for film dosimetry. It is not only flexible enough to work with any linear or non-linear regression but can also provide information on the minimal number of sampling points, critical sampling distributions and the evaluation of analytical functions for absolute film-dose calibration. Methods: The technique of leave-one-out (LOO) cross validation is often used for statistical analyses of model performance. We used LOO analyses with perturbed bootstrap fitting, called leave-one-out perturbation (LOOP), for film-dose calibration. Given a threshold, the LOO process detects unfit points (“outliers”) relative to the other cohort members, and a bootstrap fitting process follows to seek any possibility of using perturbations for further improvement. After that, outliers were reconfirmed by traditional t-test statistics and eliminated, and another LOOP feedback pass produced the final result. An over-sampled film-dose-calibration dataset was collected as a reference (dose range: 0-800cGy), and various simulated conditions for outliers and sampling distributions were derived from the reference. Comparisons over the various conditions were made, and the performance of the fitting functions, polynomial and rational, was evaluated. Results: (1) LOOP demonstrated sensitive outlier recognition through the statistical correlation between leaving an outlier out and an exceptionally better goodness-of-fit. (2) With sufficient statistical information, LOOP can correct outliers under some low-sampling conditions that other “robust fits”, e.g. Least Absolute Residuals, cannot. (3) Complete cross-validated analyses of LOOP indicate that the rational function demonstrates much superior performance compared to the polynomial. Even with 5 data points including one outlier, using LOOP with a rational function can restore more than 95% of the reference value, while the polynomial fitting completely failed under the same conditions
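
    The leave-one-out core of LOOP can be sketched as follows, with a polynomial stand-in for the rational calibration function and the bootstrap-perturbation and t-test confirmation steps omitted for brevity:

        # LOO outlier flagging for a film calibration curve (polynomial stand-in).
        import numpy as np

        dose = np.array([0.0, 100, 200, 300, 400, 500, 600, 700, 800], float)
        od = 0.004 * dose - 2e-6 * dose**2    # synthetic optical-density readings
        od[4] += 0.35                         # inject one outlier reading

        def loo_residual_gain(x, y, deg=2):
            full_rss = np.sum((y - np.polyval(np.polyfit(x, y, deg), x)) ** 2)
            gains = []
            for i in range(len(x)):
                keep = np.arange(len(x)) != i
                fit = np.polyfit(x[keep], y[keep], deg)
                rss = np.sum((y[keep] - np.polyval(fit, x[keep])) ** 2)
                gains.append(full_rss - rss)  # large gain => point i does not fit cohort
            return np.array(gains)

        print(int(np.argmax(loo_residual_gain(dose, od))))   # index of suspected outlier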

  7. Development of a golden beam data set for the commissioning of a proton double-scattering system in a pencil-beam dose calculation algorithm

    SciTech Connect

    Slopsema, R. L. Flampouri, S.; Yeung, D.; Li, Z.; Lin, L.; McDonough, J. E.; Palta, J.

    2014-09-15

    Purpose: The purpose of this investigation is to determine if a single set of beam data, described by a minimal set of equations and fitting variables, can be used to commission different installations of a proton double-scattering system in a commercial pencil-beam dose calculation algorithm. Methods: The beam model parameters required to commission the pencil-beam dose calculation algorithm (virtual and effective SAD, effective source size, and pristine-peak energy spread) are determined for a commercial double-scattering system. These parameters are measured in a first room and parameterized as a function of proton energy and nozzle settings by fitting four analytical equations to the measured data. The combination of these equations and fitting values constitutes the golden beam data (GBD). To determine the variation in dose delivery between installations, the same dosimetric properties are measured in two additional rooms at the same facility, as well as in a single room at another facility. The differences between the room-specific measurements and the GBD are evaluated against tolerances that guarantee the 3D dose distribution in each of the rooms matches the GBD-based dose distribution within clinically reasonable limits. The pencil-beam treatment-planning algorithm is commissioned with the GBD. The three-dimensional dose distribution in water is evaluated in the four treatment rooms and compared to the treatment-planning calculated dose distribution. Results: The virtual and effective SAD measurements fall between 226 and 257 cm. The effective source size varies between 2.4 and 6.2 cm for the large-field options, and 1.0 and 2.0 cm for the small-field options. The pristine-peak energy spread decreases from 1.05% at the lowest range to 0.6% at the highest. The virtual SAD as well as the effective source size can be accurately described by a linear relationship as a function of the inverse of the residual energy. An additional linear correction term as a function of
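
    One of the GBD parameterizations noted above, the virtual SAD as a linear function of the inverse residual energy, might be fitted as in this sketch; the sample points are illustrative, not commissioning measurements:

        # Golden-beam-data style fit: virtual SAD vs inverse residual energy.
        import numpy as np

        residual_energy_mev = np.array([100.0, 140.0, 180.0, 220.0])  # illustrative
        virtual_sad_cm = np.array([255.0, 247.0, 240.0, 236.0])       # illustrative

        slope, intercept = np.polyfit(1.0 / residual_energy_mev, virtual_sad_cm, 1)

        def golden_virtual_sad(energy_mev):
            return intercept + slope / energy_mev

        print(golden_virtual_sad(160.0))   # cm, interpolated from the golden fit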

  8. Testing the GLAaS algorithm for dose measurements on low- and high-energy photon beams using an amorphous silicon portal imager

    SciTech Connect

    Nicolini, Giorgia; Fogliata, Antonella; Vanetti, Eugenio; Clivio, Alessandro; Vetterli, Daniel; Cozzi, Luca

    2008-02-15

    The GLAaS algorithm for pretreatment intensity modulation radiation therapy absolute dose verification based on the use of amorphous silicon detectors, as described in Nicolini et al. [G. Nicolini, A. Fogliata, E. Vanetti, A. Clivio, and L. Cozzi, Med. Phys. 33, 2839-2851 (2006)], was tested under a variety of experimental conditions to investigate its robustness, the possibility of using it in different clinics, and its performance. GLAaS was therefore tested on a low-energy Varian Clinac (6 MV) equipped with an amorphous silicon Portal Vision PV-aS500 with electronic readout IAS2 and on a high-energy Clinac (6 and 15 MV) equipped with a PV-aS1000 and IAS3 electronics. Tests were performed for three calibration conditions: A: adding buildup on top of the cassette such that SDD-SSD = dmax and comparing measurements with the corresponding doses computed at dmax; B: without adding any buildup on top of the cassette and considering only the intrinsic water-equivalent thickness of the electronic portal imaging device (0.8 cm); and C: without adding any buildup on top of the cassette but comparing measurements against doses computed at dmax. This procedure is similar to that usually applied when in vivo dosimetry is performed with solid-state diodes without sufficient buildup material. Quantitatively, the gamma index (γ), as described by Low et al. [D. A. Low, W. B. Harms, S. Mutic, and J. A. Purdy, Med. Phys. 25, 656-660 (1998)], was assessed. The γ index was computed for a distance to agreement (DTA) of 3 mm. The dose difference ΔD was considered as 2%, 3%, and 4%. As a measure of the quality of the results, the fraction of the field area with gamma larger than 1 (%FA) was scored. Results over a set of 50 test samples (including fields from head and neck, breast, prostate, anal canal, and brain cases) and from long-term routine usage demonstrated the robustness and stability of GLAaS. In general, the mean values of %FA
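
    A brute-force sketch of the γ evaluation on a small 2D grid, using a 3 mm DTA and a 3% global dose-difference criterion, with toy dose distributions standing in for the calculated and measured maps:

        # Global gamma-index map and %FA on a small 2-D grid (brute force).
        import numpy as np

        def gamma_map(ref, ev, spacing_mm, dta_mm=3.0, dd_frac=0.03):
            ny, nx = ref.shape
            yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
            dmax = ref.max()
            out = np.empty_like(ref, dtype=float)
            for j in range(ny):
                for i in range(nx):
                    dist2 = ((yy - j) ** 2 + (xx - i) ** 2) * spacing_mm ** 2
                    ddose2 = (ev - ref[j, i]) ** 2
                    gamma2 = dist2 / dta_mm ** 2 + ddose2 / (dd_frac * dmax) ** 2
                    out[j, i] = np.sqrt(gamma2.min())   # min over evaluated points
            return out

        ref = np.fromfunction(lambda j, i: np.exp(-((i - 16) ** 2) / 60.0), (32, 32))
        ev = 1.02 * ref                                 # 2% scaled "measurement"
        g = gamma_map(ref, ev, spacing_mm=1.0)
        print(float((g > 1).mean() * 100), "% of area fails (%FA)")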

  9. Raman spectroscopy for the analytical quality control of low-dose break-scored tablets.

    PubMed

    Gómez, Diego A; Coello, Jordi; Maspoch, Santiago

    2016-05-30

    Quality control of solid dosage forms involves the analysis of end products according to well-defined criteria, including the assessment of the uniformity of dosage units (UDU). However, in the case of break-scored tablets, given that tablet splitting is widespread as a means to adjust doses, the uniform distribution of the active pharmaceutical ingredient (API) in all the possible fractions of the tablet must also be assessed. A general procedure to address both issues, using Raman spectroscopy, is presented. It is based on the acquisition of a collection of spectra in different regions of the tablet, which can later be selected to determine the amount of API in the potential fractions that can result after splitting. The procedure has been applied to two commercial products, Sintrom 1 and Sintrom 4, with API (acenocoumarol) mass proportions of 2% and 0.7%, respectively. Partial Least Squares (PLS) calibration models were constructed for the quantification of acenocoumarol in whole tablets using HPLC as the reference analytical method. Once validated, the calibration models were used to determine the API content in the different potential fragments of the scored Sintrom 4 tablets. Fragment mass measurements were also performed to estimate the range of masses of the halves and quarters that could result after tablet splitting. The results show that Raman spectroscopy can be an alternative analytical procedure for assessing the uniformity of content, both in whole tablets and in their potential fragments, and that Sintrom 4 tablets can be split perfectly into halves, but some caution must be taken when considering fragmentation into quarters. A practical alternative to the UDU test for the assessment of tablet fragments is proposed. PMID:26962721
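
    A minimal sketch of the PLS calibration step, training on synthetic tablet spectra against hypothetical HPLC reference values and then predicting the API content of a split fragment; all spectra and concentrations below are fabricated for illustration:

        # PLS calibration of API content from (synthetic) Raman spectra.
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(4)
        n_tablets, n_wavenumbers = 40, 300
        api_mg = rng.uniform(0.5, 1.5, n_tablets)        # hypothetical HPLC reference values
        api_band = np.exp(-((np.arange(n_wavenumbers) - 150) ** 2) / 50.0)
        spectra = np.outer(api_mg, api_band) + rng.normal(0, 0.02, (n_tablets, n_wavenumbers))

        pls = PLSRegression(n_components=3).fit(spectra, api_mg)

        half_tablet = 0.5 * (api_mg[0] * api_band)       # spectrum of a split fragment
        print(float(np.ravel(pls.predict(half_tablet[None, :]))[0]))  # predicted API, mg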

  10. SU-E-I-89: Assessment of CT Radiation Dose and Image Quality for An Automated Tube Potential Selection Algorithm Using Pediatric Anthropomorphic and ACR Phantoms

    SciTech Connect

    Mahmood, U; Erdi, Y; Wang, W

    2014-06-01

    Purpose: To assess the impact of General Electric's automated tube potential algorithm, kV assist (kVa), on radiation dose and image quality, with an emphasis on optimizing protocols based on noise texture. Methods: Radiation dose was assessed by inserting optically stimulated luminescence dosimeters (OSLs) throughout the body of a pediatric anthropomorphic phantom (CIRS). The baseline protocol was: 120 kVp, 80 mA, 0.7 s rotation time. Image quality was assessed by calculating the contrast-to-noise ratio (CNR) and noise power spectrum (NPS) from the ACR CT accreditation phantom. CNRs were calculated according to the steps described in the ACR CT phantom testing document. NPS was determined by taking the 3D FFT of the uniformity section of the ACR phantom. NPS and CNR were evaluated with and without kVa and for all available adaptive statistical iterative reconstruction (ASiR) settings, ranging from 0 to 100%. Each NPS was also evaluated for its peak frequency difference (PFD) with respect to the baseline protocol. Results: For the baseline protocol, CNR was found to decrease from 0.460 ± 0.182 to 0.420 ± 0.057 when kVa was activated. When compared against the baseline protocol, the PFD at ASiR of 40% yielded a decrease in noise magnitude, as reflected by the increase in CNR to 0.620 ± 0.040. The liver dose decreased by 30% with kVa activation. Conclusion: Application of kVa reduces the liver dose by up to 30%. However, a reduction in image quality for abdominal scans occurs when using the automated tube voltage selection feature at the baseline protocol. As demonstrated by the CNR and NPS analysis, the texture and magnitude of the noise in reconstructed images at ASiR 40% were found to be the same as in our baseline images. We have demonstrated that a 30% dose reduction is possible when using 40% ASiR with kVa in pediatric patients.
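
    The NPS and PFD computations lend themselves to a compact sketch. Below is one plausible implementation: an ensemble NPS from uniform-section ROIs and the radial peak frequency whose shift defines the PFD. Normalization conventions vary between groups, so treat the scaling as illustrative; names are ours.

```python
import numpy as np

def noise_power_spectrum_2d(rois, pixel_mm):
    """Ensemble 2D NPS from a stack of uniform-region ROIs (n_roi, ny, nx)."""
    ny, nx = rois.shape[1:]
    nps = np.zeros((ny, nx))
    for roi in rois:
        noise = roi - roi.mean()                    # remove the mean (DC) signal
        f = np.fft.fftshift(np.fft.fft2(noise))
        nps += np.abs(f) ** 2
    nps *= (pixel_mm ** 2) / (len(rois) * nx * ny)  # one common NPS normalization
    return nps

def peak_frequency(nps, pixel_mm, n_bins=50):
    """Radial frequency (1/mm) at which the radially averaged NPS peaks."""
    ny, nx = nps.shape
    fy = np.fft.fftshift(np.fft.fftfreq(ny, d=pixel_mm))
    fx = np.fft.fftshift(np.fft.fftfreq(nx, d=pixel_mm))
    fr = np.hypot(*np.meshgrid(fy, fx, indexing="ij")).ravel()
    bins = np.linspace(0.0, fr.max(), n_bins)
    idx = np.digitize(fr, bins)
    radial = np.array([nps.ravel()[idx == i].mean() for i in range(1, n_bins)])
    return bins[np.nanargmax(radial)]   # PFD = |peak(protocol) - peak(baseline)|
```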

  11. Prediction of human observer performance in a 2-alternative forced choice low-contrast detection task using channelized Hotelling observer: Impact of radiation dose and reconstruction algorithms

    SciTech Connect

    Yu Lifeng; Leng Shuai; Chen Lingyun; Kofler, James M.; McCollough, Cynthia H.; Carter, Rickey E.

    2013-04-15

    Purpose: Efficient optimization of CT protocols demands a quantitative approach to predicting human observer performance on specific tasks at various scan and reconstruction settings. The goal of this work was to investigate how well a channelized Hotelling observer (CHO) can predict human observer performance on 2-alternative forced choice (2AFC) lesion-detection tasks at various dose levels and two different reconstruction algorithms: a filtered-backprojection (FBP) and an iterative reconstruction (IR) method. Methods: A 35 × 26 cm² torso-shaped phantom filled with water was used to simulate an average-sized patient. Three rods with different diameters (small: 3 mm; medium: 5 mm; large: 9 mm) were placed in the center region of the phantom to simulate small, medium, and large lesions. The contrast relative to background was −15 HU at 120 kV. The phantom was scanned 100 times using automatic exposure control at each of 60, 120, 240, 360, and 480 quality reference mAs on a 128-slice scanner. After removing the three rods, the water phantom was again scanned 100 times to provide signal-absent background images at the exact same locations. By extracting regions of interest around the three rods and on the signal-absent images, the authors generated 21 2AFC studies. Each 2AFC study had 100 trials, with each trial consisting of a signal-present image and a signal-absent image side-by-side in randomized order. In total, 2100 trials were presented to both the model and human observers. Four medical physicists acted as human observers. For the model observer, the authors used a CHO with Gabor channels, which involves six channel passbands, five orientations, and two phases, leading to a total of 60 channels. The performance predicted by the CHO was compared with that obtained by the four medical physicists at each 2AFC study. Results: The human and model observers were highly correlated at each dose level for each lesion size for both FBP and IR. The
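
    A minimal CHO of the kind described can be written in a few lines once a channel matrix is available. The sketch below assumes flattened ROIs and a precomputed (e.g., Gabor) channel matrix, estimates the Hotelling template from channel outputs, and converts detectability into 2AFC percent correct under a Gaussian assumption; variable names are ours.

```python
import numpy as np
from math import erf, sqrt

def cho_2afc_percent_correct(imgs_present, imgs_absent, channels):
    """2AFC percent correct predicted by a channelized Hotelling observer.

    imgs_*: (n_images, ny*nx) flattened ROIs; channels: (ny*nx, n_channels),
    e.g. 60 Gabor channels (6 passbands x 5 orientations x 2 phases).
    """
    v1 = imgs_present @ channels          # channel outputs, signal present
    v0 = imgs_absent @ channels           # channel outputs, signal absent
    ds = v1.mean(axis=0) - v0.mean(axis=0)
    k = 0.5 * (np.cov(v1, rowvar=False) + np.cov(v0, rowvar=False))
    w = np.linalg.pinv(k) @ ds            # Hotelling template; pinv guards
                                          # against an ill-conditioned covariance
    t1, t0 = v1 @ w, v0 @ w               # decision variable per image
    d = (t1.mean() - t0.mean()) / sqrt(0.5 * (t1.var() + t0.var()))
    return 0.5 * (1.0 + erf(d / 2.0))     # Pc = Phi(d'/sqrt(2))
```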

  12. SU-E-I-81: Assessment of CT Radiation Dose and Image Quality for An Automated Tube Potential Selection Algorithm Using Adult Anthropomorphic and ACR Phantoms

    SciTech Connect

    Mahmood, U; Erdi, Y; Wang, W

    2014-06-01

    Purpose: To assess the impact of General Electric's (GE) automated tube potential algorithm, kV assist (kVa), on radiation dose and image quality, with an emphasis on optimizing protocols based on noise texture. Methods: Radiation dose was assessed by inserting optically stimulated luminescence dosimeters (OSLs) throughout the body of an adult anthropomorphic phantom (CIRS). The baseline protocol was: 120 kVp, Auto mA (180 to 380 mA), noise index (NI) = 14, adaptive statistical iterative reconstruction (ASiR) of 20%, 0.8 s rotation time. Image quality was evaluated by calculating the contrast-to-noise ratio (CNR) and noise power spectrum (NPS) from the ACR CT accreditation phantom. CNRs were calculated according to the steps described in the ACR CT phantom testing document. NPS was determined by taking the 3D FFT of the uniformity section of the ACR phantom. NPS and CNR were evaluated with and without kVa and for all available ASiR settings, ranging from 0 to 100%. Each NPS was also evaluated for its peak frequency difference (PFD) with respect to the baseline protocol. Results: The CNR for the adult male was found to decrease from CNR = 0.912 ± 0.045 for the baseline protocol without kVa to CNR = 0.756 ± 0.049 with kVa activated. When compared against the baseline protocol, the PFD at ASiR of 40% yielded a decrease in noise magnitude, as reflected by the increase in CNR to 0.903 ± 0.023. The difference in the central liver dose with and without kVa was found to be 0.07%. Conclusion: Dose reduction was insignificant in the adult phantom. As determined by NPS analysis, ASiR of 40% produced images with similar noise texture to the baseline protocol. However, the CNR at ASiR of 40% with kVa fails to meet the current ACR CNR passing requirement of 1.0.

  13. Warfarin pharmacogenomics: current best evidence.

    PubMed

    Kimmel, S E

    2015-06-01

    The utility of using genetic information to guide warfarin dosing has remained unclear based on prior observational studies and small clinical trials. Two larger trials of warfarin and one of acenocoumarol and phenprocoumon have recently been published. The COAG trial addressed the incremental benefit of adding genetic information to clinical information and demonstrated no benefit from the pharmacogenetic-based dosing strategy on the primary outcome. The EU-PACT UK trial compared an algorithm approach using genetic and clinical information with one that used a relatively fixed starting dose. The pharmacogenetic-based algorithms improved the primary outcome. The study of acenocoumarol and phenprocoumon compared a pharmacogenetic algorithm with a clinical algorithm and demonstrated no benefit on the primary outcome. The evidence to date does not support an incremental benefit of adding genetic information to clinical information for anticoagulation control. However, compared with fixed dosing, a pharmacogenetic algorithm can improve anticoagulation control.

  14. Validation of a method for in vivo 3D dose reconstruction for IMRT and VMAT treatments using on-treatment EPID images and a model-based forward-calculation algorithm

    SciTech Connect

    Van Uytven, Eric; Van Beek, Timothy; McCowan, Peter M.; Chytyk-Praznik, Krista; Greer, Peter B.; McCurdy, Boyd M. C.

    2015-12-15

    Purpose: Radiation treatments are trending toward delivering higher doses per fraction under stereotactic radiosurgery and hypofractionated treatment regimens. There is a need for accurate 3D in vivo patient dose verification using electronic portal imaging device (EPID) measurements. This work presents a model-based technique to compute full three-dimensional patient dose reconstructed from on-treatment EPID portal images (i.e., transmission images). Methods: EPID dose is converted to incident fluence entering the patient using a series of steps which include converting measured EPID dose to fluence at the detector plane and then back-projecting the primary source component of the EPID fluence upstream of the patient. Incident fluence is then recombined with predicted extra-focal fluence and used to calculate 3D patient dose via a collapsed-cone convolution method. This method is implemented in an iterative manner, although in practice it provides accurate results in a single iteration. The robustness of the dose reconstruction technique is demonstrated with several simple slab phantom and nine anthropomorphic phantom cases. Prostate, head and neck, and lung treatments are all included as well as a range of delivery techniques including VMAT and dynamic intensity modulated radiation therapy (IMRT). Results: Results indicate that the patient dose reconstruction algorithm compares well with treatment planning system computed doses for controlled test situations. For simple phantom and square field tests, agreement was excellent with a 2%/2 mm 3D chi pass rate ≥98.9%. On anthropomorphic phantoms, the 2%/2 mm 3D chi pass rates ranged from 79.9% to 99.9% in the planning target volume (PTV) region and 96.5% to 100% in the low dose region (>20% of prescription, excluding PTV and skin build-up region). Conclusions: An algorithm to reconstruct delivered patient 3D doses from EPID exit dosimetry measurements was presented. The method was applied to phantom and patient

  15. SU-E-T-579: On the Relative Sensitivity of Monte Carlo and Pencil Beam Dose Calculation Algorithms to CT Metal Artifacts in Volumetric-Modulated Arc Spine Radiosurgery (RS)

    SciTech Connect

    Wong, M; Lee, V; Leung, R; Lee, K; Law, G; Tung, S; Chan, M; Blanck, O

    2015-06-15

    Purpose: Investigating the relative sensitivity of Monte Carlo (MC) and Pencil Beam (PB) dose calculation algorithms to low-Z (titanium) metallic artifacts is important for accurate and consistent dose reporting in postoperative spinal RS. Methods: Sensitivity analysis of the MC and PB dose calculation algorithms in the Monaco v.3.3 treatment planning system (Elekta CMS, Maryland Heights, MO, USA) was performed using CT images reconstructed without (plain) and with Orthopedic Metal Artifact Reduction (OMAR; Philips Healthcare, Cleveland, OH, USA). 6 MV and 10 MV volumetric-modulated arc (VMAT) RS plans were obtained for MC and PB on the plain and OMAR images (MC-plain/OMAR and PB-plain/OMAR). Results: Maximum differences in dose to 0.2 cc (D0.2cc) of the spinal cord and cord +2 mm for the 6 MV and 10 MV VMAT plans were 0.1 Gy between MC-OMAR and MC-plain, and between PB-OMAR and PB-plain. Planning target volume (PTV) dose coverage changed by 0.1±0.7% and 0.2±0.3% for 6 MV and 10 MV from MC-OMAR to MC-plain, and by 0.1±0.1% for both 6 MV and 10 MV from PB-OMAR to PB-plain, respectively. In no case, for either MC or PB, did the D0.2cc to the spinal cord exceed the planned tolerance when changing from OMAR to plain CT in the dose calculations. Conclusion: The dosimetric impact of metallic artifacts caused by low-Z metallic spinal hardware (mainly titanium alloy) is not clinically important in VMAT-based spine RS, without significant dependence on the dose calculation method (MC or PB) for photon energies ≥ 6 MV. There is no need to use one algorithm instead of the other to reduce uncertainty in dose reporting. The dose calculation method used in spine RS should be consistent with usual clinical practice.

  16. Population Pharmacokinetics of Busulfan in Pediatric and Young Adult Patients Undergoing Hematopoietic Cell Transplant: A Model-Based Dosing Algorithm for Personalized Therapy and Implementation into Routine Clinical Use

    PubMed Central

    Long-Boyle, Janel; Savic, Rada; Yan, Shirley; Bartelink, Imke; Musick, Lisa; French, Deborah; Law, Jason; Horn, Biljana; Cowan, Morton J.; Dvorak, Christopher C.

    2014-01-01

    Background Population pharmacokinetic (PK) studies of busulfan in children have shown that individualized model-based algorithms provide improved targeted busulfan therapy when compared to conventional dosing. The adoption of population PK models into routine clinical practice has been hampered by the tendency of pharmacologists to develop complex models too impractical for clinicians to use. The authors aimed to develop a population PK model for busulfan in children that can reliably achieve therapeutic exposure (concentration-at-steady-state, Css) and implement a simple, model-based tool for the initial dosing of busulfan in children undergoing HCT. Patients and Methods Model development was conducted using retrospective data available in 90 pediatric and young adult patients who had undergone HCT with busulfan conditioning. Busulfan drug levels and potential covariates influencing drug exposure were analyzed using the non-linear mixed effects modeling software, NONMEM. The final population PK model was implemented into a clinician-friendly, Microsoft Excel-based tool and used to recommend initial doses of busulfan in a group of 21 pediatric patients prospectively dosed based on the population PK model. Results Modeling of busulfan time-concentration data indicates busulfan CL displays non-linearity in children, decreasing up to approximately 20% between the concentrations of 250–2000 ng/mL. Important patient-specific covariates found to significantly impact busulfan CL were actual body weight and age. The percentage of individuals achieving a therapeutic Css was significantly higher in subjects receiving initial doses based on the population PK model (81%) versus historical controls dosed on conventional guidelines (52%) (p = 0.02). Conclusion When compared to the conventional dosing guidelines, the model-based algorithm demonstrates significant improvement for providing targeted busulfan therapy in children and young adults. PMID:25162216

  17. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  18. SU-F-BRD-15: The Impact of Dose Calculation Algorithm and Hounsfield Units Conversion Tables On Plan Dosimetry for Lung SBRT

    SciTech Connect

    Kuo, L; Yorke, E; Lim, S; Mechalakos, J; Rimner, A

    2014-06-15

    Purpose: To assess dosimetric differences in IMRT lung stereotactic body radiotherapy (SBRT) plans calculated with Varian AAA and Acuros (AXB) and with vendor-supplied (V) versus in-house (IH) measured Hounsfield unit (HU) to mass density and HU to electron density conversion tables. Methods: In-house conversion tables were measured using a Gammex 472 density-plug phantom. IMRT plans (6 MV, Varian TrueBeam, 6-9 coplanar fields) meeting departmental coverage and normal tissue constraints were retrospectively generated for 10 lung SBRT cases using Eclipse Vn 10.0.28 AAA with in-house tables (AAA/IH). Using these monitor units and MLC sequences, plans were recalculated with AAA and vendor tables (AAA/V) and with AXB with both tables (AXB/IH and AXB/V). Ratios to corresponding AAA/IH values were calculated for PTV D95, D01, D99, mean dose, total and ipsilateral lung V20, and chestwall V30. Statistical significance of differences was judged by the Wilcoxon signed rank test (p<0.05). Results: For HU < −400, the vendor HU-to-mass-density table was notably below the IH table. PTV D95 ratios to AAA/IH, averaged over all patients, were 0.963±0.073 (p=0.508), 0.914±0.126 (p=0.011), and 0.998±0.001 (p=0.005) for AXB/IH, AXB/V and AAA/V respectively. Total lung V20 ratios were 1.006±0.046 (p=0.386), 0.975±0.080 (p=0.514) and 0.998±0.002 (p=0.007); ipsilateral lung V20 ratios were 1.008±0.041 (p=0.284), 0.977±0.076 (p=0.443), and 0.998±0.018 (p=0.005) for AXB/IH, AXB/V and AAA/V respectively. In 7 cases, ratios to AAA/IH were within ±5% for all indices studied. For 3 cases characterized by very low lung density and small PTV (19.99±8.09 cc), the PTV D95 ratio for AXB/V ranged from 67.4% to 85.9%, and the AXB/IH D95 ratio ranged from 81.6% to 93.4%; there were large differences in the other studied indices. Conclusion: For AXB users, careful attention to HU conversion tables is important, as they can significantly impact AXB (but not AAA) lung SBRT plans. Algorithm selection is also important for
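
    The mechanics behind such conversion-table differences are simple: the planning system linearly interpolates density between HU anchor points, so a low anchor in the lung region shifts every low-density voxel. A toy version, with placeholder table values rather than the vendor or in-house data discussed above:

```python
import numpy as np

# Illustrative HU -> mass density lookup (g/cc); the anchor values below are
# placeholders, not the vendor or in-house tables compared in the abstract.
HU_TABLE = np.array([-1000.0, -700.0, -90.0, 0.0, 300.0, 1200.0])
RHO_TABLE = np.array([0.001, 0.30, 0.92, 1.00, 1.16, 1.70])

def hu_to_density(hu):
    """Piecewise-linear interpolation, as TPS conversion tables are applied."""
    return np.interp(hu, HU_TABLE, RHO_TABLE)
```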

  19. Stereotactic Body Radiotherapy for Primary Lung Cancer at a Dose of 50 Gy Total in Five Fractions to the Periphery of the Planning Target Volume Calculated Using a Superposition Algorithm

    SciTech Connect

    Takeda, Atsuya; Sanuki, Naoko; Kunieda, Etsuo Ohashi, Toshio; Oku, Yohei; Takeda, Toshiaki; Shigematsu, Naoyuki; Kubo, Atsushi

    2009-02-01

    Purpose: To retrospectively analyze the clinical outcomes of stereotactic body radiotherapy (SBRT) for patients with Stages 1A and 1B non-small-cell lung cancer. Methods and Materials: We reviewed the records of patients with non-small-cell lung cancer treated with curative intent between Dec 2001 and May 2007. All patients had histopathologically or cytologically confirmed disease, increased levels of tumor markers, and/or positive findings on fluorodeoxyglucose positron emission tomography. Staging studies identified their disease as Stage 1A or 1B. Performance status was 2 or less according to World Health Organization guidelines in all cases. The prescribed dose of 50 Gy total in five fractions, calculated by using a superposition algorithm, was defined for the periphery of the planning target volume. Results: One hundred twenty-one patients underwent SBRT during the study period, and 63 were eligible for this analysis. Thirty-eight patients had Stage 1A (T1N0M0) and 25 had Stage 1B (T2N0M0). Forty-nine patients were not appropriate candidates for surgery because of chronic pulmonary disease. Median follow-up of these 49 patients was 31 months (range, 10-72 months). The 3-year local control, disease-free, and overall survival rates in patients with Stages 1A and 1B were 93% and 96% (p = 0.86), 76% and 77% (p = 0.83), and 90% and 63% (p = 0.09), respectively. No acute toxicity was observed. Grade 2 or higher radiation pneumonitis was experienced by 3 patients, and 1 of them had fatal bacterial pneumonia. Conclusions: The SBRT at 50 Gy total in five fractions to the periphery of the planning target volume calculated by using a superposition algorithm is feasible. High local control rates were achieved for both T2 and T1 tumors.

  20. MO-A-BRD-09: A Data-Mining Algorithm for Large Scale Analysis of Dose-Outcome Relationships in a Database of Irradiated Head-And-Neck (HN) Cancer Patients

    SciTech Connect

    Robertson, SP; Quon, H; Kiess, AP; Moore, JA; Yang, W; Cheng, Z; Sharabi, A; McNutt, TR

    2014-06-15

    Purpose: To develop a framework for automatic extraction of clinically meaningful dosimetric-outcome relationships from an in-house, analytic oncology database. Methods: Dose-volume histograms (DVH) and clinical outcome-related structured data elements have been routinely stored to our database for 513 HN cancer patients treated from 2007 to 2014. SQL queries were developed to extract outcomes that had been assessed for at least 100 patients, as well as DVH curves for organs-at-risk (OAR) that were contoured for at least 100 patients. DVH curves for paired OAR (e.g., left and right parotids) were automatically combined and included as additional structures for analysis. For each OAR-outcome combination, DVH dose points, D(V_t), at a series of normalized volume thresholds, V_t = [0.01, 0.99], were stratified into two groups based on outcomes after treatment completion. The probability, P[D(V_t)], of an outcome was modeled at each V_t by logistic regression. Notable combinations, defined as having P[D(V_t)] increase by at least 5% per Gy (p<0.05), were further evaluated for clinical relevance using a custom graphical interface. Results: A total of 57 individual and combined structures and 115 outcomes were queried, resulting in over 6,500 combinations for analysis. Of these, 528 combinations met the 5%/Gy requirement, with further manual inspection revealing a number of reasonable models based on either reported literature or proximity between neighboring OAR. The data mining algorithm confirmed the following well-known toxicity/outcome relationships: dysphagia/larynx, voice changes/larynx, esophagitis/esophagus, xerostomia/combined parotids, and mucositis/oral mucosa. Other notable relationships included dysphagia/pharyngeal constrictors, nausea/brainstem, nausea/spinal cord, weight-loss/mandible, and weight-loss/combined parotids. Conclusion: Our database platform has enabled large-scale analysis of dose-outcome relationships. The current data
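
    The screening step described (logistic regression of outcome on D(V_t), flagging combinations whose probability rises by at least 5% per Gy with p < 0.05) could be sketched as below. statsmodels is assumed, and we approximate the "% per Gy" criterion by the logistic curve's maximum slope, beta/4, which is one reasonable reading of the abstract; names are ours.

```python
import numpy as np
import statsmodels.api as sm

def screen_dvh_outcome(d_vt, outcome, min_slope=0.05, alpha=0.05):
    """Flag a DVH dose point D(V_t) whose logistic model gains >= 5%/Gy.

    d_vt: dose (Gy) at one normalized volume threshold, per patient;
    outcome: 0/1 indicator. Returns (flagged, slope_per_gy, p_value).
    """
    X = sm.add_constant(np.asarray(d_vt, dtype=float))
    fit = sm.Logit(np.asarray(outcome), X).fit(disp=0)
    beta, p = fit.params[1], fit.pvalues[1]
    slope = beta / 4.0   # steepest slope of the logistic curve, at P = 0.5
    return (slope >= min_slope and p < alpha), slope, p
```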

  1. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem-solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.

  2. A dose error evaluation study for 4D dose calculations

    NASA Astrophysics Data System (ADS)

    Milz, Stefan; Wilkens, Jan J.; Ullrich, Wolfgang

    2014-10-01

    Previous studies have shown that respiration-induced motion is not negligible for Stereotactic Body Radiation Therapy. The intrafractional breathing-induced motion influences the delivered dose distribution on the underlying patient geometry such as the lung or the abdomen. If a static geometry is used, the planning process for these indications does not represent the entire dynamic process. The quality of a full 4D dose calculation approach depends on the dose coordinate transformation process between deformable geometries. This article provides an evaluation study that introduces an advanced method to verify the quality of numerical dose transformations generated by four different algorithms. The transformation metric used is based on the deviation of the dose mass histogram (DMH) and the mean dose throughout dose transformation. The study compares the results of four algorithms. In general, two elementary approaches are used: dose mapping and energy transformation. Dose interpolation (DIM) and an advanced concept, the so-called divergent dose mapping model (dDMM), are used for dose mapping. These algorithms are compared to the basic energy transformation model (bETM) and the energy mass congruent mapping (EMCM). For evaluation, 900 small sample regions of interest (ROI) are generated inside an exemplary lung geometry (4DCT). A homogeneous fluence distribution is assumed for dose calculation inside the ROIs. The dose transformations are performed with the four different algorithms. The study investigates the DMH metric and the mean dose metric for different scenarios (voxel sizes: 8 mm, 4 mm, 2 mm, 1 mm; 9 different breathing phases). dDMM achieves the best transformation accuracy in all measured test cases, with 3-5% lower errors than the other models. The results of dDMM are reasonable and most efficient in this study, although the model is simple and easy to implement. The EMCM model also achieved suitable results, but the approach requires a more complex

  3. Modulation of insulin dose titration using a hypoglycaemia-sensitive algorithm: insulin glargine versus neutral protamine Hagedorn insulin in insulin-naïve people with type 2 diabetes

    PubMed Central

    Home, P D; Bolli, G B; Mathieu, C; Deerochanawong, C; Landgraf, W; Candelas, C; Pilorget, V; Dain, M-P; Riddle, M C

    2015-01-01

    Aims: To examine whether insulin glargine can lead to better control of glycated haemoglobin (HbA1c) than that achieved by neutral protamine Hagedorn (NPH) insulin, using a protocol designed to limit nocturnal hypoglycaemia. Methods: The present study, the Least One Oral Antidiabetic Drug Treatment (LANCELOT) Study, was a 36-week, randomized, open-label, parallel-arm study conducted in Europe, Asia, the Middle East and South America. Participants were randomized (1:1) to begin glargine or NPH, on a background of metformin with glimepiride. Weekly insulin titration aimed to achieve median prebreakfast and nocturnal plasma glucose levels ≤5.5 mmol/l, while limiting values ≤4.4 mmol/l. Results: The efficacy population (n = 701) had a mean age of 57 years, a mean body mass index of 29.8 kg/m2, a mean duration of diabetes of 9.2 years and a mean HbA1c level of 8.2% (66 mmol/mol). At treatment end, HbA1c values and the proportion of participants with HbA1c <7.0% (<53 mmol/mol) were not significantly different for glargine [7.1% (54 mmol/mol) and 50.3%] versus NPH [7.2% (55 mmol/mol) and 44.3%]. The rate of symptomatic nocturnal hypoglycaemia, confirmed by plasma glucose ≤3.9 or ≤3.1 mmol/l, was 29% and 48% lower with glargine than with NPH insulin. Other outcomes were similar between the groups. Conclusion: Insulin glargine was not superior to NPH insulin in improving glycaemic control. The insulin dosing algorithm was not sufficient to equalize nocturnal hypoglycaemia between the two insulins. This study confirms, in a globally heterogeneous population, the reduction in nocturnal hypoglycaemia achieved with insulin glargine compared with NPH while attaining good glycaemic control, even when titrating basal insulin to prevent nocturnal hypoglycaemia rather than treating according to normal fasting glucose levels. PMID:24957785
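
    For intuition, a hypothetical weekly titration rule in the spirit described (treat-to-target with a hypoglycaemia-sensitive back-off) might look like the sketch below. The ±2 U steps, thresholds, and decision order are our placeholders, not the trial protocol.

```python
def titrate_basal_dose(dose_u, median_fasting_mmol, any_nocturnal_low):
    """One illustrative weekly titration step (placeholder rule, not the trial's).

    Target: median prebreakfast/nocturnal glucose <= 5.5 mmol/l while
    avoiding values <= 4.4 mmol/l.
    """
    if any_nocturnal_low or median_fasting_mmol <= 4.4:
        return max(dose_u - 2, 0)   # back off after a hypoglycaemia signal
    if median_fasting_mmol > 5.5:
        return dose_u + 2           # step up toward target
    return dose_u                   # within the band: hold the dose
```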

  4. Dose reconstruction for intensity-modulated radiation therapy using a non-iterative method and portal dose image

    NASA Astrophysics Data System (ADS)

    Yeo, Inhwan Jason; Jung, Jae Won; Chew, Meng; Kim, Jong Oh; Wang, Brian; Di Biase, Steven; Zhu, Yunping; Lee, Dohyung

    2009-09-01

    A straightforward and accurate method was developed to verify the delivery of intensity-modulated radiation therapy (IMRT) and to reconstruct the dose in a patient. The method is based on a computational algorithm that linearly describes the physical relationship between beamlets and dose-scoring voxels in a patient and the dose image from an electronic portal imaging device (EPID). The relationship is expressed in the form of dose response functions (responses) that are quantified using Monte Carlo (MC) particle transport techniques. From the dose information measured by the EPID, the received patient dose is reconstructed by inversely solving the algorithm. The unique and novel non-iterative feature of this algorithm sets it apart from many existing dose reconstruction methods in the literature. This study presents the algorithm in detail and validates it experimentally for open and IMRT fields. Responses were first calculated for each beamlet of the selected fields by MC simulation. In-phantom and exit film dosimetry were performed on a flat phantom. Using the calculated responses and the algorithm, the exit film dose was used to inversely reconstruct the in-phantom dose, which was then compared with the measured in-phantom dose. The dose comparison in the phantom for all irradiated fields showed that more than 90% of dose points passed, given criteria of a 3% dose difference and a 3 mm distance to agreement.
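
    Because the relationship between beamlet weights and measured dose is linear, the non-iterative reconstruction amounts to solving one linear system d = R b. A minimal sketch, assuming the Monte Carlo response matrix R has already been computed; names and the least-squares solver choice are ours.

```python
import numpy as np

def reconstruct_beamlet_weights(R, d_epid):
    """Invert d = R b for beamlet weights b in a single (non-iterative) step.

    R[i, j]: dose response of EPID pixel i to unit weight of beamlet j,
    precomputed by Monte Carlo. d_epid: measured portal dose image, flattened.
    """
    b, *_ = np.linalg.lstsq(R, d_epid, rcond=None)
    return np.clip(b, 0, None)   # negative fluence is unphysical

# Patient dose then follows by applying an in-patient response matrix Rp:
# dose_patient = Rp @ b
```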

  5. A Simple Low-dose X-ray CT Simulation from High-dose Scan

    PubMed Central

    Zeng, Dong; Huang, Jing; Bian, Zhaoying; Niu, Shanzhou; Zhang, Hua; Feng, Qianjin; Liang, Zhengrong

    2015-01-01

    Low-dose X-ray computed tomography (CT) simulation from high-dose scans is required for optimizing the radiation dose to patients. In this study, we propose a simple low-dose CT simulation strategy in the sinogram domain using the raw data from a high-dose scan. Specifically, a relationship between the incident fluxes of low- and high-dose scans is first determined from repeated projection measurements and analysis. Second, the incident flux level of the simulated low-dose scan is generated by properly scaling the incident flux level of the high-dose scan via the relationship determined in the first step. Third, the low-dose CT transmission data from energy-integrating detection are simulated by adding a statistically independent Poisson noise distribution plus a statistically independent Gaussian noise distribution. Finally, a filtered back-projection (FBP) algorithm is implemented to reconstruct the resultant low-dose CT images. The present low-dose simulation strategy is verified on simulations and real scans by comparing it with an existing low-dose CT simulation tool. Experimental results demonstrated that the present low-dose CT simulation strategy can generate accurate low-dose CT sinogram data from high-dose scans in terms of qualitative and quantitative measurements. PMID:26543245
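
    The three-step simulation maps naturally onto a few lines of NumPy. This sketch assumes the high-dose sinogram is available as line integrals and that the incident flux and electronic-noise level are known; all numbers and names are placeholders.

```python
import numpy as np

def simulate_low_dose_sinogram(sino_high, I0_high, dose_fraction,
                               sigma_e=10.0, rng=None):
    """Simulate a low-dose transmission sinogram from a high-dose one.

    sino_high: line integrals -ln(I/I0) from the high-dose scan.
    I0_high: incident photon flux of the high-dose protocol.
    dose_fraction: e.g. 0.25 to emulate a quarter-dose scan.
    sigma_e: std of additive Gaussian electronic noise (detector units).
    """
    rng = rng or np.random.default_rng()
    I0_low = dose_fraction * I0_high                  # scaled incident flux
    mean_counts = I0_low * np.exp(-sino_high)         # Beer-Lambert law
    counts = rng.poisson(mean_counts).astype(float)   # quantum (Poisson) noise
    counts += rng.normal(0.0, sigma_e, counts.shape)  # electronic (Gaussian) noise
    counts = np.clip(counts, 1.0, None)               # avoid log of <= 0
    return -np.log(counts / I0_low)                   # low-dose line integrals
```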

  6. Optimization of the double dosimetry algorithm for interventional cardiologists

    NASA Astrophysics Data System (ADS)

    Chumak, Vadim; Morgun, Artem; Bakhanova, Elena; Voloskiy, Vitalii; Borodynchik, Elena

    2014-11-01

    A double dosimetry method is recommended in interventional cardiology (IC) to assess occupational exposure, yet there is currently no common and universal algorithm for effective dose estimation. In this work, a flexible and adaptive algorithm-building methodology was developed, and a specific algorithm applicable to the typical irradiation conditions of IC procedures was obtained. It was shown that the obtained algorithm agrees well with experimental measurements and is less conservative than other known algorithms.
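
    Double dosimetry algorithms are typically linear combinations of the under-apron and over-apron readings, E ≈ a·Hu + b·Ho, differing in the coefficients. The sketch below shows only this generic form; the default coefficients are illustrative placeholders, not the ones derived in the paper.

```python
def effective_dose_double_dosimetry(h_under_msv, h_over_msv, a=1.0, b=0.1):
    """Generic two-dosimeter estimate E = a*Hu + b*Ho (coefficients vary).

    h_under_msv: personal dose equivalent Hp(10) under the apron;
    h_over_msv: unshielded dosimeter at the collar. The weights a and b
    here are placeholders of a typical order of magnitude; published
    algorithms (including this paper's) each derive their own values.
    """
    return a * h_under_msv + b * h_over_msv
```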

  7. Algorithm development

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Lomax, Harvard

    1987-01-01

    The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.

  8. Calculation of the biological effective dose for piecewise defined dose-rate fits

    SciTech Connect

    Hobbs, Robert F.; Sgouros, George

    2009-03-15

    An algorithmic solution to the biologically effective dose (BED) calculation from the Lea-Catcheside formula for a piecewise defined function is presented. Data from patients treated for metastatic thyroid cancer were used to illustrate the solution. The Lea-Catcheside formula for the G-factor of the BED is integrated numerically using a large number of small trapezoidal fits to each integral. The algorithmically calculated BED agrees with an analytic calculation for a similarly valued, exponentially fitted dose-rate curve and is the only approach available for piecewise defined dose-rate functions.
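
    The numerical scheme described (many small trapezoids applied to the nested Lea-Catcheside integrals) can be reproduced directly. A minimal sketch for an arbitrary sampled dose-rate curve, with the usual BED relation noted at the end; the repair constant mu and the time grid are user inputs.

```python
import numpy as np

def lea_catcheside_g(t, r, mu):
    """Numerical G-factor for an arbitrary (piecewise) dose-rate curve.

    t: time grid (h); r: dose rate R(t) (Gy/h); mu: repair constant (1/h).
    G = (2 / D^2) * Int_0^T r(t) [ Int_0^t r(t') exp(-mu (t - t')) dt' ] dt
    """
    d_total = np.trapz(r, t)
    inner = np.array([
        np.trapz(r[:i + 1] * np.exp(-mu * (t[i] - t[:i + 1])), t[:i + 1])
        for i in range(len(t))
    ])
    return 2.0 * np.trapz(r * inner, t) / d_total ** 2

# With alpha/beta and total dose D: BED = D * (1 + G * D / (alpha/beta))
```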

  9. Ultra Low Dose CT Pulmonary Angiography with Iterative Reconstruction

    PubMed Central

    Koehler, Thomas; Fingerle, Alexander A.; Brendel, Bernhard; Richter, Vivien; Rasper, Michael; Rummeny, Ernst J.; Noël, Peter B.; Münzel, Daniela

    2016-01-01

    Objective Evaluation of a new iterative reconstruction algorithm (IMR) for detection/rule-out of pulmonary embolism (PE) in ultra-low dose computed tomography pulmonary angiography (CTPA). Methods Lower dose CT data sets were simulated based on CTPA examinations of 16 patients with pulmonary embolism (PE) with dose levels (DL) of 50%, 25%, 12.5%, 6.3% or 3.1% of the original tube current setting. Original CT data sets and simulated low-dose data sets were reconstructed with three reconstruction algorithms: the standard reconstruction algorithm “filtered back projection” (FBP), the first generation iterative reconstruction algorithm iDose and the next generation iterative reconstruction algorithm “Iterative Model Reconstruction” (IMR). In total, 288 CTPA data sets (16 patients, 6 tube current levels, 3 different algorithms) were evaluated by two blinded radiologists regarding image quality, diagnostic confidence, detectability of PE and contrast-to-noise ratio (CNR). Results iDose and IMR showed better detectability of PE than FBP. With IMR, sensitivity for detection of PE was 100% down to a dose level of 12.5%. iDose and IMR showed superiority to FBP regarding all characteristics of subjective (diagnostic confidence in detection of PE, image quality, image noise, artefacts) and objective image quality. The minimum DL providing acceptable diagnostic performance was 12.5% (= 0.45 mSv) for IMR, 25% (= 0.89 mSv) for iDose and 100% (= 3.57 mSv) for FBP. CNR was significantly (p < 0.001) improved by IMR compared to FBP and iDose at all dose levels. Conclusion By using IMR for detection of PE, dose reduction for CTPA of up to 75% is possible while maintaining full diagnostic confidence. This would result in a mean effective dose of approximately 0.9 mSv for CTPA. PMID:27611830

  10. Low-dose computed tomography image restoration using previous normal-dose scan

    SciTech Connect

    Ma, Jianhua; Huang, Jing; Feng, Qianjin; Zhang, Hua; Lu, Hongbing; Liang, Zhengrong; Chen, Wufan

    2011-10-15

    Purpose: In current computed tomography (CT) examinations, the associated x-ray radiation dose is of significant concern to patients and operators. A simple and cost-effective means to perform the examinations is to lower the milliampere-seconds (mAs) or kVp parameter (delivering less x-ray energy to the body) as low as reasonably achievable in data acquisition. However, lowering the mAs parameter will unavoidably increase data noise, and the noise will propagate into the CT image if no adequate noise control is applied during image reconstruction. Since a previously scanned normal-dose CT image of high diagnostic quality may be available in some clinical applications, such as CT perfusion imaging and CT angiography (CTA), this paper presents an innovative way to utilize the normal-dose scan as a priori information to induce signal restoration of the current low-dose CT image series. Methods: Unlike conventional local operations on neighboring image voxels, the nonlocal means (NLM) algorithm utilizes the redundancy of information across the whole image. This paper adapts the NLM to utilize the redundancy of information in the previous normal-dose scan and further explores ways to optimize the nonlocal weights for low-dose image restoration in the NLM framework. The resulting algorithm is called the previous normal-dose scan induced nonlocal means (ndiNLM). Because of the optimized nature of the nonlocal weights calculation, the ndiNLM algorithm does not depend heavily on image registration between the current low-dose and the previous normal-dose CT scans. Furthermore, the smoothing parameter involved in the ndiNLM algorithm can be adaptively estimated based on the image noise relationship between the current low-dose and the previous normal-dose scanning protocols. Results: Qualitative and quantitative evaluations were carried out on a physical phantom as well as clinical abdominal and brain perfusion CT scans in terms of accuracy and resolution properties. The gain by the use
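
    The core idea, computing similarity weights on the registered prior normal-dose image and applying them to average the current low-dose image, can be shown in a brute-force 2D sketch. Patch and search sizes and the smoothing parameter h are placeholders; the paper additionally optimizes the weights and adapts h, which this sketch omits.

```python
import numpy as np

def ndi_nlm_sketch(low, prior, patch=3, search=7, h=20.0):
    """Prior-scan-induced nonlocal means (brute-force 2D illustration).

    Weights come from patch distances in the registered prior normal-dose
    image; the weighted average is taken over the current low-dose image.
    """
    p, s = patch // 2, search // 2
    pad_lo = np.pad(low, s + p, mode="reflect")
    pad_pr = np.pad(prior, s + p, mode="reflect")
    out = np.zeros_like(low, dtype=float)
    for i in range(low.shape[0]):
        for j in range(low.shape[1]):
            ci, cj = i + s + p, j + s + p
            ref = pad_pr[ci - p:ci + p + 1, cj - p:cj + p + 1]
            acc, w_sum = 0.0, 0.0
            for di in range(-s, s + 1):
                for dj in range(-s, s + 1):
                    ni, nj = ci + di, cj + dj
                    cand = pad_pr[ni - p:ni + p + 1, nj - p:nj + p + 1]
                    w = np.exp(-np.sum((ref - cand) ** 2) / h ** 2)
                    acc += w * pad_lo[ni, nj]
                    w_sum += w
            out[i, j] = acc / w_sum
    return out
```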

  11. Simple benchmark for complex dose finding studies.

    PubMed

    Cheung, Ying Kuen

    2014-06-01

    While a general goal of early phase clinical studies is to identify an acceptable dose for further investigation, modern dose finding studies and designs are highly specific to individual clinical settings. In addition, as outcome-adaptive dose finding methods often involve complex algorithms, it is crucial to have diagnostic tools to evaluate the plausibility of a method's simulated performance and the adequacy of the algorithm. In this article, we propose a simple technique that provides an upper limit, or a benchmark, of accuracy for dose finding methods for a given design objective. The proposed benchmark is nonparametric optimal in the sense of O'Quigley et al. (2002, Biostatistics 3, 51-56), and is demonstrated by examples to be a practical accuracy upper bound for model-based dose finding methods. We illustrate the implementation of the technique in the context of phase I trials that consider multiple toxicities and phase I/II trials where dosing decisions are based on both toxicity and efficacy, and apply the benchmark to several clinical examples considered in the literature. By comparing the operating characteristics of a dose finding method to that of the benchmark, we can form quick initial assessments of whether the method is adequately calibrated and evaluate its sensitivity to the dose-outcome relationships.
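
    The benchmark's "complete data" construction is easy to simulate: each virtual patient carries a latent uniform tolerance, so the toxicity outcome at every dose is known, and the benchmark simply picks the dose whose empirical toxicity rate is closest to target. A sketch under these assumptions (names and defaults are ours):

```python
import numpy as np

def benchmark_accuracy(p_true, target=0.25, n_patients=30, n_sims=2000, rng=None):
    """Accuracy upper bound for dose finding via the complete-data benchmark.

    p_true: true toxicity probability at each dose level. Each simulated
    patient has a latent tolerance u ~ U(0,1) and shows toxicity at dose d
    iff u <= p_true[d], so outcomes are known at every dose simultaneously.
    """
    rng = rng or np.random.default_rng()
    p_true = np.asarray(p_true)
    correct_dose = np.argmin(np.abs(p_true - target))
    hits = 0
    for _ in range(n_sims):
        u = rng.uniform(size=(n_patients, 1))
        p_hat = (u <= p_true).mean(axis=0)   # empirical toxicity rate per dose
        hits += np.argmin(np.abs(p_hat - target)) == correct_dose
    return hits / n_sims

# e.g. benchmark_accuracy([0.05, 0.12, 0.25, 0.40, 0.55])
```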

  12. SU-E-T-280: Reconstructed Rectal Wall Dose Map-Based Verification of Rectal Dose Sparing Effect According to Rectum Definition Methods and Dose Perturbation by Air Cavity in Endo-Rectal Balloon

    SciTech Connect

    Park, J; Park, H; Lee, J; Kang, S; Lee, M; Suh, T; Lee, B

    2014-06-01

    Purpose: The dosimetric effects of the rectum definition method and of dose perturbation by the air cavity in an endo-rectal balloon (ERB) were verified using rectal-wall (Rwall) dose maps, considering systematic errors in dose optimization and calculation accuracy in intensity-modulated radiation treatment (IMRT) for prostate cancer patients. Methods: For an inflated ERB with an average diameter of 4.5 cm and an air volume of 100 cc, Rwall doses were predicted by pencil-beam convolution (PBC), the anisotropic analytic algorithm (AAA), and AcurosXB (AXB) with its material assignment function. The errors in dose optimization and calculation introduced by separating the air cavity from the whole rectum (Rwhole) were verified against measured rectal doses. The Rwall doses affected by the dose perturbation of the air cavity were evaluated using a dedicated rectal phantom allowing insertion of rolled-up Gafchromic films and glass rod detectors placed along the rectum perimeter. Inner and outer Rwall doses were verified against the reconstructed predicted rectal-wall dose maps. Dose errors and their extent at various dose levels were evaluated with estimated rectal toxicity. Results: While AXB showed an insignificant difference in target dose coverage, Rwall doses were underestimated by up to 20% when optimizing on Rwhole rather than Rwall over the whole dose range except at the maximum dose. When dose optimization was performed on Rwall, Rwall doses differed by less than 3% between dose calculation algorithms, except for an overestimation of the maximum rectal dose of up to 5% with PBC. Optimization on Rwhole caused Rwall dose differences, especially at intermediate doses. Conclusion: Dose optimization on Rwall is suggested for more accurate prediction of rectal wall dose and of the dose perturbation effect of the air cavity in IMRT for prostate cancer. This research was supported by the Leading Foreign Research Institute Recruitment Program through the National Research Foundation of Korea

  13. Direct dose mapping versus energy/mass transfer mapping for 4D dose accumulation: fundamental differences and dosimetric consequences

    NASA Astrophysics Data System (ADS)

    Li, Haisen S.; Zhong, Hualiang; Kim, Jinkoo; Glide-Hurst, Carri; Gulam, Misbah; Nurushev, Teamour S.; Chetty, Indrin J.

    2014-01-01

    The direct dose mapping (DDM) and energy/mass transfer (EMT) mapping are two essential algorithms for accumulating the dose from different anatomic phases to the reference phase when there is organ motion or tumor/tissue deformation during the delivery of radiation therapy. DDM is based on interpolation of the dose values from one dose grid to another and thus lacks rigor in defining the dose when there are multiple dose values mapped to one dose voxel in the reference phase due to tissue/tumor deformation. On the other hand, EMT counts the total energy and mass transferred to each voxel in the reference phase and calculates the dose by dividing the energy by mass. Therefore it is based on fundamentally sound physics principles. In this study, we implemented the two algorithms and integrated them within the Eclipse treatment planning system. We then compared the clinical dosimetric difference between the two algorithms for ten lung cancer patients receiving stereotactic radiosurgery treatment, by accumulating the delivered dose to the end-of-exhale (EE) phase. Specifically, the respiratory period was divided into ten phases and the dose to each phase was calculated and mapped to the EE phase and then accumulated. The displacement vector field generated by Demons-based registration of the source and reference images was used to transfer the dose and energy. The DDM and EMT algorithms produced noticeably different cumulative dose in the regions with sharp mass density variations and/or high dose gradients. For the planning target volume (PTV) and internal target volume (ITV) minimum dose, the difference was up to 11% and 4% respectively. This suggests that DDM might not be adequate for obtaining an accurate dose distribution of the cumulative plan, instead, EMT should be considered.
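
    The fundamental difference is easiest to see in code. In an EMT-style accumulation, energy (dose × mass) and mass are both binned into reference voxels and divided at the end, so multiple source voxels mapping to one destination are handled naturally; DDM, by contrast, just interpolates dose values. This 1D toy uses a precomputed voxel mapping in place of a real displacement vector field; all names are ours.

```python
import numpy as np

def accumulate_emt(dose_phase, mass_phase, dest_idx, n_ref):
    """Energy/mass-transfer accumulation to the reference phase (1D sketch).

    dose_phase, mass_phase: per-voxel dose and mass in one moving phase;
    dest_idx: reference voxel each source voxel maps to (from the DVF).
    """
    energy = np.bincount(dest_idx, dose_phase * mass_phase, minlength=n_ref)
    mass = np.bincount(dest_idx, mass_phase, minlength=n_ref)
    return np.divide(energy, mass, out=np.zeros(n_ref), where=mass > 0)

def accumulate_ddm(dose_phase, src_pos, ref_pos):
    """Direct dose mapping: interpolate dose values at the mapped positions.

    src_pos must be increasing for np.interp; collisions of several source
    voxels onto one destination are silently averaged away, which is the
    weakness the abstract points out.
    """
    return np.interp(ref_pos, src_pos, dose_phase)
```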

  14. Absorbed Dose and Dose Equivalent Calculations for Modeling Effective Dose

    NASA Technical Reports Server (NTRS)

    Welton, Andrew; Lee, Kerry

    2010-01-01

    While in orbit, astronauts are exposed to a much higher dose of ionizing radiation than when on the ground. It is important to model how shielding designs on spacecraft reduce the radiation effective dose pre-flight, and to determine whether or not a danger to humans is presented. However, in order to calculate effective dose, dose equivalent calculations are needed. Dose equivalent takes into account an absorbed dose of radiation and the biological effectiveness of ionizing radiation. This is important in preventing long-term, stochastic radiation effects in humans spending time in space. Monte Carlo simulations run with the particle transport code FLUKA give absorbed and equivalent dose data for relevant shielding. The shielding geometry used in the dose calculations is a layered slab design, consisting of aluminum, polyethylene, and water. Water is used to simulate the soft tissues that compose the human body. The results obtained will provide information on how the shielding performs with many thicknesses of each material in the slab, making them directly applicable to modern spacecraft shielding geometries.

  15. Optimizing CT radiation dose based on patient size and image quality: the size-specific dose estimate method.

    PubMed

    Larson, David B

    2014-10-01

    The principle of ALARA (dose as low as reasonably achievable) calls for dose optimization rather than dose reduction, per se. Optimization of CT radiation dose is accomplished by producing images of acceptable diagnostic image quality using the lowest dose method available. Because it is image quality that constrains the dose, CT dose optimization is primarily a problem of image quality rather than radiation dose. Therefore, the primary focus in CT radiation dose optimization should be on image quality. However, no reliable direct measure of image quality has been developed for routine clinical practice. Until such measures become available, size-specific dose estimates (SSDE) can be used as a reasonable image-quality estimate. The SSDE method of radiation dose optimization for CT abdomen and pelvis consists of plotting SSDE for a sample of examinations as a function of patient size, establishing an SSDE threshold curve based on radiologists' assessment of image quality, and modifying protocols to consistently produce doses that are slightly above the threshold SSDE curve. Challenges in operationalizing CT radiation dose optimization include data gathering and monitoring, managing the complexities of the numerous protocols, scanners and operators, and understanding the relationship of the automated tube current modulation (ATCM) parameters to image quality. Because CT manufacturers currently maintain their ATCM algorithms as secret for proprietary reasons, prospective modeling of SSDE for patient populations is not possible without reverse engineering the ATCM algorithm and, hence, optimization by this method requires a trial-and-error approach.
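
    In code, the SSDE workflow is two small pieces: converting CTDIvol to SSDE with a size-dependent factor, and fitting an exponential threshold curve through exams judged acceptable. The conversion coefficients below are our recollection of the AAPM Report 204 fit for the 32 cm phantom and should be verified against the report before any use; function names are ours.

```python
import numpy as np

def ssde_mgy(ctdi_vol_mgy, water_eq_diam_cm):
    """SSDE via an AAPM Report 204-style conversion factor f = a * exp(-b * d).

    a and b below are quoted from memory for the 32 cm reference phantom;
    verify against the published report before relying on them.
    """
    a, b = 3.704369, 0.03671937
    return ctdi_vol_mgy * a * np.exp(-b * water_eq_diam_cm)

def threshold_curve(sizes_cm, ssde_ok_mgy):
    """Fit an exponential SSDE threshold through exams of acceptable quality."""
    slope, intercept = np.polyfit(sizes_cm, np.log(ssde_ok_mgy), 1)
    return lambda d: np.exp(intercept + slope * d)
```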

  16. SCCT guidelines on radiation dose and dose-optimization strategies in cardiovascular CT

    PubMed Central

    Halliburton, Sandra S.; Abbara, Suhny; Chen, Marcus Y.; Gentry, Ralph; Mahesh, Mahadevappa; Raff, Gilbert L.; Shaw, Leslee J.; Hausleiter, Jörg

    2012-01-01

    Over the last few years, computed tomography (CT) has developed into a standard clinical test for a variety of cardiovascular conditions. The emergence of cardiovascular CT during a period of dramatic increase in radiation exposure to the population from medical procedures and heightened concern about the subsequent potential cancer risk has led to intense scrutiny of the radiation burden of this new technique. This has hastened the development and implementation of dose reduction tools and prompted closer monitoring of patient dose. In an effort to aid the cardiovascular CT community in incorporating patient-centered radiation dose optimization and monitoring strategies into standard practice, the Society of Cardiovascular Computed Tomography has produced a guideline document to review available data and provide recommendations regarding interpretation of radiation dose indices and predictors of risk, appropriate use of scanner acquisition modes and settings, development of algorithms for dose optimization, and establishment of procedures for dose monitoring. PMID:21723512

  17. Benchmark Dose Modeling

    EPA Science Inventory

    Finite doses are employed in experimental toxicology studies. Under the traditional methodology, the point of departure (POD) value for low dose extrapolation is identified as one of these doses. Dose spacing necessarily precludes a more accurate description of the POD value. ...

  18. SU-D-BRB-07: Lipiodol Impact On Dose Distribution in Liver SBRT After TACE

    SciTech Connect

    Kawahara, D; Ozawa, S; Hioki, K; Suzuki, T; Lin, Y; Okumura, T; Ochi, Y; Nakashima, T; Ohno, Y; Kimura, T; Murakami, Y; Nagata, Y

    2015-06-15

    Purpose: Stereotactic body radiotherapy (SBRT) combined with transarterial chemoembolization (TACE) with Lipiodol is expected to improve local control. This study aims to evaluate the impact of Lipiodol on dose distribution by comparing the dosimetric performance of the Acuros XB (AXB) algorithm, the anisotropic analytical algorithm (AAA), and the Monte Carlo (MC) method using a virtual heterogeneous phantom and a treatment plan for liver SBRT after TACE. Methods: The dose distributions calculated using the AAA and AXB algorithms, both in Eclipse (ver. 11; Varian Medical Systems, Palo Alto, CA), and EGSnrc MC were compared. First, the inhomogeneity correction accuracy of the AXB algorithm and AAA was evaluated by comparing the percent depth dose (PDD) obtained from the algorithms with that from the MC calculations using a virtual inhomogeneity phantom, which included water and Lipiodol. Second, the dose distribution of a liver SBRT patient treatment plan was compared between the calculation algorithms. Results: In the virtual phantom, compared with the MC calculations, AAA underestimated the doses just before and in the Lipiodol region by 5.1% and 9.5%, respectively, and overestimated the doses behind the region by 6.0%. Furthermore, compared with the MC calculations, the AXB algorithm underestimated the doses just before and in the Lipiodol region by 4.5% and 10.5%, respectively, and overestimated the doses behind the region by 4.2%. In the SBRT plan, the AAA and AXB algorithms underestimated the maximum dose in the Lipiodol region by 9.0% in comparison with the MC calculations. In clinical cases, the dose enhancement in the Lipiodol region can provide an approximately 10% increase in tumor dose without increasing the dose to normal tissue. Conclusion: The MC method demonstrated a larger dose increase in the Lipiodol region than the AAA and AXB algorithms. Notably, dose enhancement was observed in the tumor area; this may lead to a clinical benefit.

  19. Algorithmic chemistry

    SciTech Connect

    Fontana, W.

    1990-12-13

    In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed-size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.

  20. From cellular doses to average lung dose.

    PubMed

    Hofmann, W; Winkler-Heil, R

    2015-11-01

    Sensitive basal and secretory cells receive a wide range of doses in human bronchial and bronchiolar airways. Variations of cellular doses arise from the location of target cells in the bronchial epithelium of a given airway and the asymmetry and variability of airway dimensions of the lung among airways in a given airway generation and among bronchial and bronchiolar airway generations. To derive a single value for the average lung dose which can be related to epidemiologically observed lung cancer risk, appropriate weighting scenarios have to be applied. Potential biological weighting parameters are the relative frequency of target cells, the number of progenitor cells, the contribution of dose enhancement at airway bifurcations, the promotional effect of cigarette smoking and, finally, the application of appropriate regional apportionment factors. Depending on the choice of weighting parameters, detriment-weighted average lung doses can vary by a factor of up to 4 for given radon progeny exposure conditions.

  1. Brachytherapy source characterization for improved dose calculations using primary and scatter dose separation

    SciTech Connect

    Russell, Kellie R.; Carlsson Tedgren, Aasa K.; Ahnesjoe, Anders

    2005-09-15

    In brachytherapy, tissue heterogeneities, source shielding, and finite patient/phantom extensions affect both the primary and scatter dose distributions. The primary dose is, due to the short range of secondary electrons, dependent only on the distribution of material located on the ray line between the source and the dose deposition site. The scatter dose depends on both the direct irradiation pattern and the distribution of material in a large volume surrounding the point of interest, i.e., a much larger volume must be included in calculations to integrate many small dose contributions. It is therefore of interest to consider different methods for the primary and the scatter dose calculation to improve calculation accuracy with limited computer resources. The algorithms in present clinical use ignore these effects, causing systematic dose errors in brachytherapy treatment planning. In this work we review a primary and scatter dose separation formalism (PSS) for brachytherapy source characterization to support separate calculation of the primary and scatter dose contributions. We show how the resulting source characterization data can be used to drive more accurate dose calculations using collapsed cone superposition for scatter dose calculations. Two types of source characterization data paths are used: a direct Monte Carlo simulation in water phantoms with subsequent parameterization of the results, and an alternative data path built on processing of AAPM TG43 formatted data to provide similar parameter sets. The latter path is motivated by the large amount of data already existing in the TG43 format. We demonstrate the PSS methods using both data paths for a clinical {sup 192}Ir source. Results are shown for two geometries: a finite but homogeneous water phantom, and a half-slab consisting of water and air. The dose distributions are compared to results from full Monte Carlo simulations and we show significant improvement in scatter dose calculations when the

  2. Distortions induced by radioactive seeds into interstitial brachytherapy dose distributions.

    PubMed

    Zhou, Chuanyu; Inanc, Feyzi; Modrick, Joseph M

    2004-12-01

    In a previous article, we presented the development and verification of an integral transport equation-based deterministic algorithm for computing three-dimensional brachytherapy dose distributions. Recently, we have added fluorescence radiation physics and parallel computation to the existing algorithms so that we can compute dose distributions for a large set of seeds without resorting to superposition methods. The introduction of parallel computing capability provided a means to compute the dose distribution for multiple seeds simultaneously. This provided a way to study the strong heterogeneity and shadow effects induced by the presence of multiple seeds in an interstitial brachytherapy implant. This article presents the algorithm for computing fluorescence radiation, the algorithm for parallel computing, and results for an 81-seed implant with perfect and imperfect lattices. Dosimetry data for a single model 6711 seed are presented for verification, and heterogeneity factor computations using simultaneous and superposition techniques are presented.
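
    The superposition baseline that the transport calculation improves upon is simply a sum of a single-seed dose kernel over all seed positions, ignoring interseed attenuation and shadowing. A sketch of that baseline, with the kernel left abstract (e.g., a TG-43-style point-source function); names are ours.

```python
import numpy as np

def dose_superposition(grid_xyz, seed_xyz, kernel):
    """Superpose a single-seed dose kernel over all seed positions.

    grid_xyz: (n_points, 3) dose calculation points (cm);
    seed_xyz: iterable of (3,) seed positions; kernel: dose vs. distance.
    Interseed attenuation is ignored, which is exactly the approximation
    the simultaneous transport solution removes.
    """
    dose = np.zeros(len(grid_xyz))
    for s in seed_xyz:
        r = np.linalg.norm(grid_xyz - s, axis=1)
        dose += kernel(r)
    return dose

# e.g. a point-source-like kernel: kernel = lambda r: S_k * Lam * g(r) / r**2
```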

  3. A simplified analytical random walk model for proton dose calculation

    NASA Astrophysics Data System (ADS)

    Yao, Weiguang; Merchant, Thomas E.; Farr, Jonathan B.

    2016-10-01

    We propose an analytical random walk model for proton dose calculation in a laterally homogeneous medium. A formula for the spatial fluence distribution of primary protons is derived. The variance of the spatial distribution is in the form of a distance-squared law of the angular distribution. To improve the accuracy of dose calculation in the Bragg peak region, the energy spectrum of the protons is used. The accuracy is validated against Monte Carlo simulation in water phantoms with either air gaps or a slab of bone inserted. The algorithm accurately reflects the dose dependence on the depth of the bone and can deal with small-field dosimetry. We further applied the algorithm to patients’ cases in the highly heterogeneous head and pelvis sites and used a gamma test to show the reasonable accuracy of the algorithm in these sites. Our algorithm is fast for clinical use.
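
    The distance-squared law mentioned above is the small-angle (Fermi-Eyges) random-walk result: for a depth-independent scattering power T, the angular variance grows as T·z and the spatial variance as T·z³/3. A sketch with a constant, user-supplied scattering power standing in for a proper per-slab (e.g., Highland) calculation; names are ours.

```python
import numpy as np

def lateral_sigma_cm(z_cm, t_rad2_per_cm):
    """Lateral spread of primary protons under the distance-squared law.

    t_rad2_per_cm: constant scattering power T (rad^2/cm), a simplification;
    sigma_x^2(z) = T * z^3 / 3 for depth-independent T.
    """
    return np.sqrt(t_rad2_per_cm * z_cm ** 3 / 3.0)

def gaussian_lateral_fluence(x_cm, z_cm, t_rad2_per_cm):
    """Normalized lateral fluence profile of the primary protons at depth z."""
    s = lateral_sigma_cm(z_cm, t_rad2_per_cm)
    return np.exp(-x_cm ** 2 / (2 * s ** 2)) / (np.sqrt(2 * np.pi) * s)
```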

  4. Proton dose calculation based on in-air fluence measurements.

    PubMed

    Schaffner, Barbara

    2008-03-21

    Proton dose calculation algorithms, like photon and electron algorithms, are usually based on configuration measurements taken in a water phantom. The exceptions to this are proton dose calculation algorithms for modulated scanning beams; there, it is usual to measure the spot profiles in air. We use the concept of in-air configuration measurements also for scattering and uniform scanning (wobbling) proton delivery techniques. The dose calculation includes a separate step for the calculation of the in-air fluence distribution per energy layer. The in-air fluence calculation is specific to the technique and, to a lesser extent, the design of the treatment machine. The actual dose calculation uses the in-air fluence as input and is generic for all proton machine designs and techniques. PMID:18367787

  5. Use of effective dose.

    PubMed

    Harrison, J D; Balonov, M; Martin, C J; Ortiz Lopez, P; Menzel, H-G; Simmonds, J R; Smith-Bindman, R; Wakeford, R

    2016-06-01

    International Commission on Radiological Protection (ICRP) Publication 103 provided a detailed explanation of the purpose and use of effective dose and equivalent dose to individual organs and tissues. Effective dose has proven to be a valuable and robust quantity for use in the implementation of protection principles. However, questions have arisen regarding practical applications, and a Task Group has been set up to consider issues of concern. This paper focusses on two key proposals developed by the Task Group that are under consideration by ICRP: (1) confusion will be avoided if equivalent dose is no longer used as a protection quantity, but regarded as an intermediate step in the calculation of effective dose. It would be more appropriate for limits for the avoidance of deterministic effects to the hands and feet, lens of the eye, and skin, to be set in terms of the quantity, absorbed dose (Gy) rather than equivalent dose (Sv). (2) Effective dose is in widespread use in medical practice as a measure of risk, thereby going beyond its intended purpose. While doses incurred at low levels of exposure may be measured or assessed with reasonable reliability, health effects have not been demonstrated reliably at such levels but are inferred. However, bearing in mind the uncertainties associated with risk projection to low doses or low dose rates, it may be considered reasonable to use effective dose as a rough indicator of possible risk, with the additional consideration of variation in risk with age, sex and population group. PMID:26980800

  6. An expanded pharmacogenomics warfarin dosing table with utility in generalised dosing guidance.

    PubMed

    Shahabi, Payman; Scheinfeldt, Laura B; Lynch, Daniel E; Schmidlen, Tara J; Perreault, Sylvie; Keller, Margaret A; Kasper, Rachel; Wawak, Lisa; Jarvis, Joseph P; Gerry, Norman P; Gordon, Erynn S; Christman, Michael F; Dubé, Marie-Pierre; Gharani, Neda

    2016-08-01

    Pharmacogenomics (PGx) guided warfarin dosing, using a comprehensive dosing algorithm, is expected to improve dose optimisation and lower the risk of adverse drug reactions. As a complementary tool, a simple genotype-dosing table, such as in the US Food and Drug Administration (FDA) Coumadin drug label, may be utilised for general risk assessment of likely over- or under-anticoagulation on a standard dose of warfarin. This tool may be used as part of the clinical decision support for the interpretation of genetic data, serving as a first step in the anticoagulation therapy decision making process. Here we used a publicly available warfarin dosing calculator (www.warfarindosing.org) to create an expanded gene-based warfarin dosing table, the CPMC-WD table that includes nine genetic variants in CYP2C9, VKORC1, and CYP4F2. Using two datasets, a European American cohort (EUA, n=73) and the Quebec Warfarin Cohort (QWC, n=769), we show that the CPMC-WD table more accurately predicts therapeutic dose than the FDA table (51% vs 33%, respectively, in the EUA, McNemar's two-sided p=0.02; 52% vs 37% in the QWC, p<1×10^-6). It also outperforms both the standard of care 5 mg/day dosing (51% vs 34% in the EUA, p=0.04; 52% vs 31% in the QWC, p<1×10^-6) as well as a clinical-only algorithm (51% vs 38% in the EUA, trend p=0.11; 52% vs 45% in the QWC, p=0.003). This table offers a valuable update to the PGx dosing guideline in the drug label.
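
    A gene-based dosing table of this kind reduces, computationally, to a genotype lookup plus the paper's accuracy metric. The sketch below is hypothetical: the two-variant keys and dose bins are invented for illustration and are not the CPMC-WD or FDA table entries.

      # hypothetical genotype bins; the real CPMC-WD table spans nine variants
      TABLE = {
          ("CYP2C9*1/*1", "VKORC1 G/G"): (5.0, 7.0),   # mg/day dose range
          ("CYP2C9*1/*2", "VKORC1 G/A"): (3.0, 4.0),
          ("CYP2C9*2/*2", "VKORC1 A/A"): (0.5, 2.0),
      }

      def within_20_percent(genotype, therapeutic_dose):
          # score a prediction the way the paper scores algorithms
          lo, hi = TABLE[genotype]
          predicted = 0.5 * (lo + hi)               # take the bin midpoint
          return abs(predicted - therapeutic_dose) <= 0.2 * therapeutic_dose

      print(within_20_percent(("CYP2C9*1/*2", "VKORC1 G/A"), 3.5))   # True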

  7. High-dose-rate prostate brachytherapy inverse planning on dose-volume criteria by simulated annealing

    NASA Astrophysics Data System (ADS)

    Deist, T. M.; Gorissen, B. L.

    2016-02-01

    High-dose-rate brachytherapy is a tumor treatment method where a highly radioactive source is brought in close proximity to the tumor. In this paper we develop a simulated annealing algorithm to optimize the dwell times at preselected dwell positions to maximize tumor coverage under dose-volume constraints on the organs at risk. Compared to existing algorithms, our algorithm has advantages in terms of speed and objective value and does not require an expensive general purpose solver. Its success mainly depends on exploiting the efficiency of matrix multiplication and a careful selection of the neighboring states. In this paper we outline its details and make an in-depth comparison with existing methods using real patient data.

  8. High-dose-rate prostate brachytherapy inverse planning on dose-volume criteria by simulated annealing.

    PubMed

    Deist, T M; Gorissen, B L

    2016-02-01

    High-dose-rate brachytherapy is a tumor treatment method where a highly radioactive source is brought in close proximity to the tumor. In this paper we develop a simulated annealing algorithm to optimize the dwell times at preselected dwell positions to maximize tumor coverage under dose-volume constraints on the organs at risk. Compared to existing algorithms, our algorithm has advantages in terms of speed and objective value and does not require an expensive general purpose solver. Its success mainly depends on exploiting the efficiency of matrix multiplication and a careful selection of the neighboring states. In this paper we outline its details and make an in-depth comparison with existing methods using real patient data. PMID:26760757
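
    The core loop of such a dwell-time optimizer can be sketched in a few lines. Everything below is illustrative (random dose-rate matrices, a prescription of 10, one dose-volume constraint, a geometric cooling schedule); the paper's neighbor selection and objective are more careful.

      import numpy as np

      rng = np.random.default_rng(0)
      D_tum = rng.uniform(0.0, 1.0, (500, 50))     # dose per unit dwell time
      D_oar = rng.uniform(0.0, 0.5, (200, 50))     # (voxels x dwell positions)

      def coverage(t):
          return np.mean(D_tum @ t >= 10.0)        # tumor fraction at Rx dose

      def feasible(t):
          return np.mean(D_oar @ t > 8.0) <= 0.10  # dose-volume constraint

      t = np.zeros(50)                             # start from a feasible plan
      cur, temp = coverage(t), 0.5
      for _ in range(20000):
          cand = t.copy()
          cand[rng.integers(50)] += rng.normal(0.0, 0.5)
          cand = np.maximum(cand, 0.0)             # dwell times stay nonnegative
          if feasible(cand):
              c = coverage(cand)
              if c >= cur or rng.random() < np.exp((c - cur) / temp):
                  t, cur = cand, c
          temp *= 0.9997                           # geometric cooling
      print(cur)

    The matrix products D @ t dominate the cost, which matches the abstract's remark about exploiting efficient matrix multiplication.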

  9. Comparison of computed tomography dose reporting software.

    PubMed

    Abdullah, A; Sun, Z; Pongnapang, N; Ng, K-H

    2012-08-01

    Computed tomography (CT) dose reporting software facilitates the estimation of doses to patients undergoing CT examinations. In this study, a comparison of three software packages, i.e. CT-Expo (version 1.5, Medizinische Hochschule, Hannover, Germany), ImPACT CT Patients Dosimetry Calculator (version 0.99x, Imaging Performance Assessment on Computed Tomography, www.impactscan.org) and WinDose (version 2.1a, Wellhofer Dosimetry, Schwarzenbruck, Germany), has been made in terms of their calculation algorithms and the resulting calculated doses. Estimations were performed for head, chest, abdominal and pelvic examinations based on the protocols recommended by European guidelines, using single-slice CT (SSCT) (Siemens Somatom Plus 4, Erlangen, Germany) and multi-slice CT (MSCT) (Siemens Sensation 16, Erlangen, Germany) for software-based female and male phantoms. The results showed differences in the final dose reports provided by these software packages, with deviations in the calculated effective doses: coefficients of variation range from 3.3 to 23.4% in SSCT and from 10.6 to 43.8% in MSCT. It is important that researchers state the name of the software that is used to estimate the various CT dose quantities. Users must also understand the equivalent terminologies between the information obtained from the CT console and the software packages in order to use the software correctly.

  10. Control algorithms for dynamic attenuators

    SciTech Connect

    Hsieh, Scott S.; Pelc, Norbert J.

    2014-06-15

    Purpose: The authors describe algorithms to control dynamic attenuators in CT and compare their performance using simulated scans. Dynamic attenuators are prepatient beam shaping filters that modulate the distribution of x-ray fluence incident on the patient on a view-by-view basis. These attenuators can reduce dose while improving key image quality metrics such as peak or mean variance. In each view, the attenuator presents several degrees of freedom which may be individually adjusted. The total number of degrees of freedom across all views is very large, making many optimization techniques impractical. The authors develop a theory for optimally controlling these attenuators. Special attention is paid to a theoretically perfect attenuator which controls the fluence for each ray individually, but the authors also investigate and compare three other, practical attenuator designs which have been previously proposed: the piecewise-linear attenuator, the translating attenuator, and the double wedge attenuator. Methods: The authors pose and solve the optimization problems of minimizing the mean and peak variance subject to a fixed dose limit. For a perfect attenuator and mean variance minimization, this problem can be solved in simple, closed form. For other attenuator designs, the problem can be decomposed into separate problems for each view to greatly reduce the computational complexity. Peak variance minimization can be approximately solved using iterated, weighted mean variance (WMV) minimization. Also, the authors develop heuristics for the perfect and piecewise-linear attenuators which do not require a priori knowledge of the patient anatomy. The authors compare these control algorithms on different types of dynamic attenuators using simulated raw data from forward projected DICOM files of a thorax and an abdomen. Results: The translating and double wedge attenuators reduce dose by an average of 30% relative to current techniques (bowtie filter with tube current
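
    The iterated, weighted mean variance (WMV) idea is easy to sketch under a simplified model in which a ray with line integral l_i and incident fluence f_i contributes variance exp(l_i)/f_i, and patient dose is proportional to the total incident fluence (both modeling assumptions are ours, for illustration):

      import numpy as np

      rng = np.random.default_rng(1)
      l = rng.uniform(0.0, 4.0, 200)               # attenuation line integrals

      def wmv_fluence(w, budget):
          # minimize sum_i w_i * exp(l_i) / f_i  s.t.  sum_i f_i = budget;
          # Lagrange multipliers give f_i proportional to sqrt(w_i) * exp(l_i / 2)
          f = np.sqrt(w) * np.exp(l / 2.0)
          return f * budget / f.sum()

      w = np.ones_like(l)
      for _ in range(50):                          # iterated WMV for peak variance
          f = wmv_fluence(w, budget=1e6)
          var = np.exp(l) / f                      # per-ray variance model
          w *= var / var.mean()                    # upweight the worst rays
      print(var.max() / var.mean())                # approaches 1 as peaks flatten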

  11. Library of Continuation Algorithms

    2005-03-01

    LOCA (Library of Continuation Algorithms) is scientific software written in C++ that provides advanced analysis tools for nonlinear systems. In particular, it provides parameter continuation algorithms, bifurcation tracking algorithms, and drivers for linear stability analysis. The algorithms are aimed at large-scale applications that use Newton’s method for their nonlinear solve.

  12. Fast proximity algorithm for MAP ECT reconstruction

    NASA Astrophysics Data System (ADS)

    Li, Si; Krol, Andrzej; Shen, Lixin; Xu, Yuesheng

    2012-03-01

    We arrived at the fixed-point formulation of the total variation maximum a posteriori (MAP) regularized emission computed tomography (ECT) reconstruction problem and we proposed an iterative alternating scheme to numerically calculate the fixed point. We theoretically proved that our algorithm converges to unique solutions. Because the obtained algorithm converges slowly, we further developed the proximity algorithm in the transformed image space, i.e. the preconditioned proximity algorithm. We used the bias-noise curve method to select optimal regularization hyperparameters for both our algorithm and expectation maximization with total variation regularization (EM-TV). We showed in the numerical experiments that our proposed algorithms, with an appropriately selected preconditioner, outperformed the conventional EM-TV algorithm in many critical aspects, such as comparatively very low noise and bias for the Shepp-Logan phantom. This has major ramifications for nuclear medicine, because clinical implementation of our preconditioned fixed-point algorithms might result in very significant radiation dose reduction in the medical applications of emission tomography.

  13. [Indications for low-dose CT in the emergency setting].

    PubMed

    Poletti, Pierre-Alexandre; Andereggen, Elisabeth; Rutschmann, Olivier; de Perrot, Thomas; Caviezel, Alessandro; Platon, Alexandra

    2009-08-19

    CT delivers a large dose of radiation, especially in abdominal imaging. Recently, a low-dose abdominal CT protocol (low-dose CT) has been set up in our institution. "Low-dose CT" is almost equivalent to a single standard abdominal radiograph in terms of radiation dose (about one sixth of that delivered by a standard CT). "Low-dose CT" is now used routinely in our emergency service for two main indications: patients with a suspicion of renal colic and those with right lower quadrant pain. It is obtained without intravenous contrast media. Oral contrast is given to patients with suspicion of appendicitis. "Low-dose CT" is used within well-defined clinical algorithms, and replaces standard CT only when it can reach a comparable diagnostic quality.

  14. Fast convolution-superposition dose calculation on graphics hardware.

    PubMed

    Hissoiny, Sami; Ozell, Benoît; Després, Philippe

    2009-06-01

    The numerical calculation of dose is central to treatment planning in radiation therapy and is at the core of optimization strategies for modern delivery techniques. In a clinical environment, dose calculation algorithms are required to be accurate and fast. The accuracy is typically achieved through the integration of patient-specific data and extensive beam modeling, which generally results in slower algorithms. In order to alleviate execution speed problems, the authors have implemented a modern dose calculation algorithm on a massively parallel hardware architecture. More specifically, they have implemented a convolution-superposition photon beam dose calculation algorithm on a commodity graphics processing unit (GPU). They have investigated a simple porting scenario as well as slightly more complex GPU optimization strategies. They have achieved speed improvement factors ranging from 10 to 20 times with GPU implementations compared to central processing unit (CPU) implementations, with higher values corresponding to larger kernel and calculation grid sizes. In all cases, they preserved the numerical accuracy of the GPU calculations with respect to the CPU calculations. These results show that streaming architectures such as GPUs can significantly accelerate dose calculation algorithms, suggesting benefits for numerically intensive processes such as optimization strategies, in particular for complex delivery techniques such as IMRT and arc therapy.
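
    In a homogeneous phantom the superposition step reduces to a true convolution, which is the form that maps most naturally onto parallel hardware. A minimal numpy sketch with an illustrative TERMA distribution and kernel shape (the paper's beam model and GPU kernels are not reproduced here):

      import numpy as np

      def convolve_superposition(terma, kernel):
          # dose = TERMA convolved with the energy-deposition kernel (FFT form)
          T = np.fft.fftn(terma)
          K = np.fft.fftn(np.fft.ifftshift(kernel))   # kernel centered mid-grid
          return np.real(np.fft.ifftn(T * K))

      shape = (32, 32, 32)
      z = np.arange(shape[2])
      terma = np.zeros(shape)
      terma[16, 16, :] = np.exp(-0.05 * z)         # pencil beam, mu = 0.05/voxel
      g = np.indices(shape) - 16
      r2 = (g ** 2).sum(axis=0) + 1.0
      kernel = np.exp(-0.5 * np.sqrt(r2)) / r2     # toy point-spread kernel
      kernel /= kernel.sum()
      print(convolve_superposition(terma, kernel)[16, 16, 10])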

  15. Neutron dose equivalent meter

    DOEpatents

    Olsher, Richard H.; Hsu, Hsiao-Hua; Casson, William H.; Vasilik, Dennis G.; Kleck, Jeffrey H.; Beverding, Anthony

    1996-01-01

    A neutron dose equivalent detector for measuring neutron dose capable of accurately responding to neutron energies according to published fluence to dose curves. The neutron dose equivalent meter has an inner sphere of polyethylene, with a middle shell overlying the inner sphere, the middle shell comprising RTV® silicone (organosiloxane) loaded with boron. An outer shell overlies the middle shell and comprises polyethylene loaded with tungsten. The neutron dose equivalent meter defines a channel through the outer shell, the middle shell, and the inner sphere for accepting a neutron counter tube. The outer shell is loaded with tungsten to provide neutron generation, increasing the neutron dose equivalent meter's response sensitivity above 8 MeV.

  16. Tissue heterogeneity in IMRT dose calculation for lung cancer.

    PubMed

    Pasciuti, Katia; Iaccarino, Giuseppe; Strigari, Lidia; Malatesta, Tiziana; Benassi, Marcello; Di Nallo, Anna Maria; Mirri, Alessandra; Pinzi, Valentina; Landoni, Valeria

    2011-01-01

    The aim of this study was to evaluate the differences in accuracy of dose calculation between 3 commonly used algorithms, the Pencil Beam algorithm (PB), the Anisotropic Analytical Algorithm (AAA), and the Collapsed Cone Convolution Superposition (CCCS), for intensity-modulated radiation therapy (IMRT). The 2D dose distributions obtained with the 3 algorithms were compared on each CT slice pixel by pixel, using the MATLAB code (The MathWorks, Natick, MA), and the agreement was assessed with the γ function. The effect of the differences on dose-volume histograms (DVHs), tumor control, and normal tissue complication probability (TCP and NTCP) was also evaluated, and its significance was quantified by using a nonparametric test. In general, PB generates regions of over-dosage both in the lung and in the tumor area. These differences are not always evident in the DVH of the lung, although the Wilcoxon test indicated significant differences in 2 of 4 patients. Disagreement in the lung region was also found when the γ analysis was performed. The effect on TCP is less important than for NTCP because of the slope of the curve at the level of the dose of interest. The effect of dose calculation inaccuracy is patient-dependent and strongly related to beam geometry and to the localization of the tumor. When multiple intensity-modulated beams are used, the effect of the presence of the heterogeneity on dose distribution may not always be easily predictable. PMID:20970989
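
    The γ function referred to above combines a dose-difference criterion with a distance-to-agreement criterion; a point passes where γ <= 1. A one-dimensional sketch with 3%/3 mm criteria and toy Gaussian profiles:

      import numpy as np

      def gamma_1d(d_eval, d_ref, x, dta_mm=3.0, dd_frac=0.03):
          # gamma_i = min over eval points of sqrt((dx/DTA)^2 + (dD/DD)^2)
          dd_abs = dd_frac * d_ref.max()
          g = np.empty_like(d_ref)
          for i in range(d_ref.size):
              dist2 = ((x - x[i]) / dta_mm) ** 2
              dose2 = ((d_eval - d_ref[i]) / dd_abs) ** 2
              g[i] = np.sqrt(np.min(dist2 + dose2))
          return g

      x = np.linspace(0.0, 100.0, 101)             # mm
      d_ref = np.exp(-((x - 50.0) / 20.0) ** 2)
      d_eval = np.exp(-((x - 51.0) / 20.0) ** 2)   # profile shifted by 1 mm
      print(np.mean(gamma_1d(d_eval, d_ref, x) <= 1.0))   # pass rate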

  17. Automated size-specific CT dose monitoring program: Assessing variability in CT dose

    SciTech Connect

    Christianson, Olav; Li Xiang; Frush, Donald; Samei, Ehsan

    2012-11-15

    Purpose: The potential health risks associated with low levels of ionizing radiation have created a movement in the radiology community to optimize computed tomography (CT) imaging protocols to use the lowest radiation dose possible without compromising the diagnostic usefulness of the images. Despite efforts to use appropriate and consistent radiation doses, studies suggest that a great deal of variability in radiation dose exists both within and between institutions for CT imaging. In this context, the authors have developed an automated size-specific radiation dose monitoring program for CT and used this program to assess variability in size-adjusted effective dose from CT imaging. Methods: The authors' radiation dose monitoring program operates on an independent Health Insurance Portability and Accountability Act (HIPAA) compliant dosimetry server. Digital Imaging and Communications in Medicine (DICOM) routing software is used to isolate dose report screen captures and scout images for all incoming CT studies. Effective dose conversion factors (k-factors) are determined based on the protocol and optical character recognition is used to extract the CT dose index and dose-length product. The patient's thickness is obtained by applying an adaptive thresholding algorithm to the scout images and is used to calculate the size-adjusted effective dose (ED{sub adj}). The radiation dose monitoring program was used to collect data on 6351 CT studies from three scanner models (GE Lightspeed Pro 16, GE Lightspeed VCT, and GE Definition CT750 HD) and two institutions over a one-month period and to analyze the variability in ED{sub adj} between scanner models and across institutions. Results: No significant difference was found between computer measurements of patient thickness and observer measurements (p = 0.17), and the average difference between the two methods was less than 4%. Applying the size correction resulted in ED{sub adj} that differed by up to 44% from effective dose estimates
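
    The thickness-then-correction pipeline can be sketched as follows; the threshold rule, the pixel size, and the exponential size-correction coefficient are all illustrative stand-ins, not the values fitted in the study.

      import numpy as np

      def patient_thickness_cm(scout, px_cm=0.1):
          # adaptive threshold halfway between background and body intensities
          thr = 0.5 * (np.percentile(scout, 10) + np.percentile(scout, 90))
          return (scout > thr).sum(axis=0).max() * px_cm

      def size_adjusted_ed(k_factor, dlp, thickness_cm, ref_cm=30.0, b=0.04):
          # ED_adj = k * DLP with an exponential correction about a
          # reference thickness (the form and coefficient are assumptions)
          return k_factor * dlp * np.exp(-b * (thickness_cm - ref_cm))

      rng = np.random.default_rng(2)
      scout = rng.normal(50.0, 5.0, (512, 512))
      scout[100:400, 150:350] += 300.0             # synthetic patient region
      t = patient_thickness_cm(scout)
      print(t, size_adjusted_ed(k_factor=0.015, dlp=450.0, thickness_cm=t))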

  18. Doses from radiation exposure.

    PubMed

    Menzel, H-G; Harrison, J D

    2012-01-01

    Practical implementation of the International Commission on Radiological Protection's (ICRP) system of protection requires the availability of appropriate methods and data. The work of Committee 2 is concerned with the development of reference data and methods for the assessment of internal and external radiation exposure of workers and members of the public. This involves the development of reference biokinetic and dosimetric models, reference anatomical models of the human body, and reference anatomical and physiological data. Following ICRP's 2007 Recommendations, Committee 2 has focused on the provision of new reference dose coefficients for external and internal exposure. As well as specifying changes to the radiation and tissue weighting factors used in the calculation of protection quantities, the 2007 Recommendations introduced the use of reference anatomical phantoms based on medical imaging data, requiring explicit sex averaging of male and female organ-equivalent doses in the calculation of effective dose. In preparation for the calculation of new dose coefficients, Committee 2 and its task groups have provided updated nuclear decay data (ICRP Publication 107) and adult reference computational phantoms (ICRP Publication 110). New dose coefficients for external exposures of workers are complete (ICRP Publication 116), and work is in progress on a series of reports on internal dose coefficients to workers from inhaled and ingested radionuclides. Reference phantoms for children will also be provided and used in the calculation of dose coefficients for public exposures. Committee 2 also has task groups on exposures to radiation in space and on the use of effective dose.

  19. Variable depth recursion algorithm for leaf sequencing

    SciTech Connect

    Siochi, R. Alfredo C.

    2007-02-15

    The processes of extraction and sweep are basic segmentation steps that are used in leaf sequencing algorithms. A modified version of a commercial leaf sequencer changed the way that the extracts are selected and expanded the search space, but the modification maintained the basic search paradigm of evaluating multiple solutions, each one consisting of up to 12 extracts and a sweep sequence. While it generated the best solutions compared to other published algorithms, it used more computation time. A new, faster algorithm selects one extract at a time but calls itself as an evaluation function a user-specified number of times, after which it uses the bidirectional sweeping window algorithm as the final evaluation function. To achieve a performance comparable to that of the modified commercial leaf sequencer, 2-3 calls were needed, and in all test cases, there were only slight improvements beyond two calls. For the 13 clinical test maps, computation speeds improved by a factor between 12 and 43, depending on the constraints, namely the ability to interdigitate and the avoidance of the tongue-and-groove underdose. The new algorithm was compared to the original and modified versions of the commercial leaf sequencer. It was also compared to other published algorithms for 1400 random 15×15 test maps with 3-16 intensity levels. In every single case the new algorithm provided the best solution.
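
    The sweep used as the final evaluation function rests on a classic single-row result: for unidirectional leaf motion, the minimal beam-on time of an intensity row equals the sum of its positive finite differences. A small sketch (row values invented):

      def sweep_beam_on_time(row):
          # accumulate only the positive gradients across the row
          prev, total = 0, 0
          for v in list(row) + [0]:
              if v > prev:
                  total += v - prev
              prev = v
          return total

      print(sweep_beam_on_time([1, 3, 2, 5, 0, 4]))   # 10 intensity units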

  1. Reasoning about systolic algorithms

    SciTech Connect

    Purushothaman, S.

    1986-01-01

    Systolic algorithms are a class of parallel algorithms, with small grain concurrency, well suited for implementation in VLSI. They are intended to be implemented as high-performance, computation-bound back-end processors and are characterized by a tessellating interconnection of identical processing elements. This dissertation investigates the problem of proving the correctness of systolic algorithms. The following are reported in this dissertation: (1) a methodology for verifying the correctness of systolic algorithms based on representing an algorithm as a set of recurrence equations and solving them. The methodology is demonstrated by proving the correctness of a systolic architecture for optimal parenthesization. (2) The implementation of mechanical proofs of correctness of two systolic algorithms, a convolution algorithm and an optimal parenthesization algorithm, using the Boyer-Moore theorem prover. (3) An induction principle for proving the correctness of systolic arrays which are modular. Two attendant inference rules, weak equivalence and shift transformation, which capture equivalent behavior of systolic arrays, are also presented.

  2. Algorithm-development activities

    NASA Technical Reports Server (NTRS)

    Carder, Kendall L.

    1994-01-01

    The task of algorithm-development activities at USF continues. The algorithm for determining chlorophyll alpha concentration (Chl alpha) and the gelbstoff absorption coefficient for SeaWiFS and MODIS-N radiance data is our current priority.

  3. INSENS classification algorithm report

    SciTech Connect

    Hernandez, J.E.; Frerking, C.J.; Myers, D.W.

    1993-07-28

    This report describes a new algorithm developed for the Immigration and Naturalization Service (INS) in support of the INSENS project for classifying vehicles and pedestrians using seismic data. This algorithm is less sensitive to nuisance alarms due to environmental events than the previous algorithm. Furthermore, the algorithm is simple enough that it can be implemented in the 8-bit microprocessor used in the INSENS system.

  4. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high order and high resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
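
    As a flavor of such single-step explicit schemes, here is a sixth-order central first-derivative stencil on a periodic grid (a generic textbook stencil, not one of the paper's specific algorithms):

      import numpy as np

      def central_diff(u, dx, coeffs):
          # symmetric stencil: sum_k c_k * (u[i+k] - u[i-k]) / dx
          du = np.zeros_like(u)
          for k, c in enumerate(coeffs, start=1):
              du += c * (np.roll(u, -k) - np.roll(u, k))
          return du / dx

      c6 = [3/4, -3/20, 1/60]                      # 6th-order weights
      x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
      u = np.sin(x)
      err = np.max(np.abs(central_diff(u, x[1] - x[0], c6) - np.cos(x)))
      print(err)                                   # small at 64 points/wavelength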

  5. Fast reconstruction of low dose proton CT by sinogram interpolation

    NASA Astrophysics Data System (ADS)

    Hansen, David C.; Sangild Sørensen, Thomas; Rit, Simon

    2016-08-01

    Proton computed tomography (CT) has been demonstrated as a promising image modality in particle therapy planning. It can reduce errors in particle range calculations and consequently improve dose calculations. Obtaining a high imaging resolution has traditionally required computationally expensive iterative reconstruction techniques to account for the multiple scattering of the protons. Recently, techniques for direct reconstruction have been developed, but these require a higher imaging dose than the iterative methods. No previous work has compared the image quality of the direct and the iterative methods. In this article, we extend the methodology for direct reconstruction to be applicable for low imaging doses and compare the obtained results with three state-of-the-art iterative algorithms. We find that the direct method yields comparable resolution and image quality to the iterative methods, even at 1 mSv dose levels, while yielding a twentyfold speedup in reconstruction time over previously published iterative algorithms.

  6. Synchronized dynamic dose reconstruction

    SciTech Connect

    Litzenberg, Dale W.; Hadley, Scott W.; Tyagi, Neelam; Balter, James M.; Ten Haken, Randall K.; Chetty, Indrin J.

    2007-01-15

    Variations in target volume position between and during treatment fractions can lead to measurable differences in the dose distribution delivered to each patient. Current methods to estimate the ongoing cumulative delivered dose distribution make idealized assumptions about individual patient motion based on average motions observed in a population of patients. In the delivery of intensity modulated radiation therapy (IMRT) with a multi-leaf collimator (MLC), errors are introduced in both the implementation and delivery processes. In addition, target motion and MLC motion can lead to dosimetric errors from interplay effects. All of these effects may be of clinical importance. Here we present a method to compute delivered dose distributions for each treatment beam and fraction, which explicitly incorporates synchronized real-time patient motion data and real-time fluence and machine configuration data. This synchronized dynamic dose reconstruction method properly accounts for the two primary classes of errors that arise from delivering IMRT with an MLC: (a) Interplay errors between target volume motion and MLC motion, and (b) Implementation errors, such as dropped segments, dose overshoot/undershoot, faulty leaf motors, tongue-and-groove effect, rounded leaf ends, and communications delays. These reconstructed dose fractions can then be combined to produce high-quality determinations of the dose distribution actually received to date, from which individualized adaptive treatment strategies can be determined.

  7. Know your dose: RADDOSE

    PubMed Central

    Paithankar, Karthik S.; Garman, Elspeth F.

    2010-01-01

    The program RADDOSE is widely used to compute the dose absorbed by a macromolecular crystal during an X-ray diffraction experiment. A number of factors affect the absorbed dose, including the incident X-ray flux density, the photon energy and the composition of the macromolecule and of the buffer in the crystal. An experimental dose limit for macromolecular crystallography (MX) of 30 MGy at 100 K has been reported, beyond which the biological information obtained may be compromised. Thus, for the planning of an optimized diffraction experiment the estimation of dose has become an additional tool. A number of approximations were made in the original version of RADDOSE. Recently, the code has been modified in order to take into account fluorescent X-ray escape from the crystal (version 2) and the inclusion of incoherent (Compton) scattering into the dose calculation is now reported (version 3). The Compton cross-section, although negligible at the energies currently commonly used in MX, should be considered in dose calculations for incident energies above 20 keV. Calculations using version 3 of RADDOSE reinforce previous studies that predict a reduction in the absorbed dose when data are collected at higher energies compared with data collected at 12.4 keV. Hence, a longer irradiation lifetime for the sample can be achieved at these higher energies but this is at the cost of lower diffraction intensities. The parameter ‘diffraction-dose efficiency’, which is the diffracted intensity per absorbed dose, is revisited in an attempt to investigate the benefits and pitfalls of data collection using higher and lower energy radiation, particularly for thin crystals. PMID:20382991

  8. Calculating drug doses.

    PubMed

    2016-09-01

    Numeracy and calculation are key skills for nurses. As nurses are directly accountable for ensuring medicines are prescribed, dispensed and administered safely, they must be able to understand and calculate drug doses. PMID:27615351

  9. Ibuprofen dosing for children

    MedlinePlus

    Motrin; Advil ... Ibuprofen is a type of nonsteroidal anti-inflammatory drug (NSAID). It can help: Reduce aches, pain, sore ... Ibuprofen can be taken as liquid or chewable tablets. To give the correct dose, you need to ...

  10. Semioptimal practicable algorithmic cooling

    NASA Astrophysics Data System (ADS)

    Elias, Yuval; Mor, Tal; Weinstein, Yossi

    2011-04-01

    Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon’s entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.
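
    The flavor of the recursion can be conveyed with the idealized three-spin compression bound, under which each recursive level boosts polarization from eps to (3*eps - eps**3)/2, roughly a 1.5x gain for small eps. The toy recursion below ignores the spin overhead and reset steps that the paper accounts for.

      def boosted_polarization(eps, levels):
          # idealized 3-spin basic compression applied recursively
          for _ in range(levels):
              eps = (3.0 * eps - eps ** 3) / 2.0
          return eps

      print(boosted_polarization(0.01, 6))   # about 0.11 starting from 1%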

  11. Measurement verification of dose distributions in pulsed-dose rate brachytherapy in breast cancer

    PubMed Central

    Mantaj, Patrycja; Zwierzchowski, Grzegorz

    2013-01-01

    Aim The aim of the study was to verify the dose distribution optimisation method in pulsed brachytherapy. Background The pulsed-dose rate brachytherapy is a very important method of breast tumour treatment using standard brachytherapy equipment. The appropriate dose distribution around an implant is an important issue in treatment planning. Advanced computer systems of treatment planning are equipped with algorithms optimising dose distribution. Materials and methods The wax-paraffin phantom was constructed and seven applicators were placed within it. Two treatment plans (non-optimised, optimised) were prepared. The reference points were located at a distance of 5 mm from the applicators’ axis. Thermoluminescent detectors were placed in the phantom at 35 suitably chosen reference points. Results The dosimetry verification was carried out in 35 reference points for the plans before and after optimisation. Percentage difference for the plan without optimisation ranged from −8.5% to 1.4% and after optimisation from −8.3% to 0.01%. In 16 reference points, the calculated percentage difference was negative (from −8.5% to 1.3% for the plan without optimisation and from −8.3% to 0.8% for the optimised plan). In the remaining 19 points the percentage difference was from 9.1% to 1.4% for the plan without optimisation and from 7.5% to 0.01% for the optimised plan. No statistically significant differences were found between calculated doses and doses measured at reference points in both dose distribution non-optimised treatment plans and optimised treatment plans. Conclusions No statistically significant differences were found in dose values at reference points between doses calculated by the treatment planning system and those measured by TLDs. This proves the consistency between the measurements and the calculations. PMID:24416545

  12. Segmentation of individual ribs from low-dose chest CT

    NASA Astrophysics Data System (ADS)

    Lee, Jaesung; Reeves, Anthony P.

    2010-03-01

    Segmentation of individual ribs and other bone structures in chest CT images is important for anatomical analysis, as the segmented ribs may be used as a baseline reference for locating organs within a chest as well as for identification and measurement of any geometric abnormalities in the bone. In this paper we present a fully automated algorithm to segment the individual ribs from low-dose chest CT scans. The proposed algorithm consists of four main stages. First, all the high-intensity bone structure present in the scan is segmented. Second, the centerline of the spinal canal is identified using a distance transform of the bone segmentation. Then, the seed region for every rib is detected based on the identified centerline, and each rib is grown from the seed region and separated from the corresponding vertebra. This algorithm was evaluated using 115 low-dose chest CT scans from public databases with various slice thicknesses. The algorithm parameters were determined using 5 scans, and the remaining 110 scans were used to evaluate the performance of the segmentation algorithm. The outcome of the algorithm was inspected by an author for the correctness of the segmentation. The results indicate that over 98% of the individual ribs were correctly segmented with the proposed algorithm.
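
    The four-stage pipeline maps naturally onto standard image-processing primitives. A compressed scipy sketch of stages 1 and 2, with stages 3 and 4 approximated by connected-component labeling (the paper's seed detection and rib/vertebra separation are more involved):

      import numpy as np
      from scipy import ndimage

      def segment_ribs(ct_hu, bone_thr=200):
          bone = ct_hu > bone_thr                  # stage 1: bone threshold
          centerline = []                          # stage 2: canal centerline as
          for z in range(bone.shape[0]):           # per-slice distance-map maximum
              dist = ndimage.distance_transform_edt(~bone[z])
              centerline.append(np.unravel_index(np.argmax(dist), dist.shape))
          labels, n = ndimage.label(bone)          # stages 3-4, crudely
          return centerline, labels, n

      vol = np.zeros((4, 64, 64))
      vol[:, 30:34, 10:20] = 400.0                 # synthetic bone block
      print(segment_ribs(vol)[2])                  # one connected component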

  13. Quantitative validation of a new coregistration algorithm

    SciTech Connect

    Pickar, R.D.; Esser, P.D.; Pozniakoff, T.A.; Van Heertum, R.L.; Stoddart, H.A. Jr.

    1995-08-01

    A new coregistration software package, Neuro900 Image Coregistration software, has been developed specifically for nuclear medicine. With this algorithm, the correlation coefficient is maximized between volumes generated from sets of transaxial slices. No localization markers or segmented surfaces are needed. The coregistration program was evaluated for translational and rotational registration accuracy. A Tc-99m HM-PAO split-dose study (0.53 mCi low dose, L, and 1.01 mCi high dose, H) was simulated with a Hoffman Brain Phantom with five fiducial markers. Translation error was determined by a shift in image centroid, and rotation error was determined by a simplified two-axis approach. Changes in registration accuracy were measured with respect to: (1) slice spacing, using the four different combinations LL, LH, HL, HH, (2) translational and rotational misalignment before coregistration, (3) changes in the step size of the iterative parameters. In all cases the algorithm converged with only small differences in translation offset and rotation angles. At 6 mm slice spacing, translational errors ranged from 0.9 to 2.8 mm (system resolution at 100 mm, 6.8 mm). The converged parameters showed little sensitivity to count density. In addition, the correlation coefficient increased with decreasing iterative step size, as expected. From these experiments, the authors found that this algorithm, based on the maximization of the correlation coefficient between studies, was an accurate way to coregister SPECT brain images.
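
    Maximizing the correlation coefficient between volumes is the heart of the method. A two-dimensional sketch with an exhaustive integer-shift search (the actual package works on volumes and optimizes rotations as well; all parameters here are illustrative):

      import numpy as np
      from scipy import ndimage

      def corr(a, b):
          a, b = a - a.mean(), b - b.mean()
          return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

      def register_translation(moving, fixed, search=range(-5, 6)):
          best = (-2.0, (0, 0))
          for dx in search:
              for dy in search:
                  c = corr(ndimage.shift(moving, (dx, dy), order=1), fixed)
                  if c > best[0]:
                      best = (c, (dx, dy))
          return best                              # (coefficient, best shift)

      rng = np.random.default_rng(5)
      fixed = ndimage.gaussian_filter(rng.random((64, 64)), 3.0)
      moving = ndimage.shift(fixed, (2.0, -3.0), order=1)
      print(register_translation(moving, fixed))   # recovers about (-2, 3)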

  14. Independent calculation of dose from a helical TomoTherapy unit.

    PubMed

    Gibbons, John P; Smith, Koren; Cheek, Dennis; Rosen, Isaac

    2009-02-05

    A new calculation algorithm has been developed for independently verifying doses calculated by the TomoTherapy Hi-Art treatment planning system (TPS). The algorithm is designed to confirm the dose to a point in a high dose, low dose-gradient region. Patient data used by the algorithm include the radiological depth to the point for each projection angle and the treatment sinogram file controlling the leaf opening time for each projection. The algorithm uses common dosimetric functions [tissue phantom ratio (TPR) and output factor (Scp)] for the central axis combined with lateral and longitudinal beam profile data to quantify the off-axis dose dependence. Machine data for the dosimetric functions were measured on the Hi-Art machine and simulated using the TPS. Point dose calculations were made for several test phantoms and for 97 patient treatment plans using the simulated machine data. Comparisons with TPS-predicted point doses for the phantom treatment plans demonstrated agreement within 2% for both on-axis and off-axis planning target volumes (PTVs). Comparisons with TPS-predicted point doses for the patient treatment plans also showed good agreement. For calculations at sites other than lung and superficial PTVs, agreement between the calculations was within 2% for 94% of the patient calculations (64 of 68). Calculations within lung and superficial PTVs overestimated the dose by an average of 3.1% (σ=2.4%) and 3.2% (σ=2.2%), respectively. Systematic errors within lung are probably due to the weakness of the algorithm in correcting for missing tissue and/or tissue density heterogeneities. Errors encountered within superficial PTVs probably result from the algorithm overestimating the scatter dose within the patient. Our results demonstrate that for the majority of cases, the algorithm could be used without further refinement to independently verify patient treatment plans.
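
    The verification formula itself is a single sum over projections. A sketch under stated assumptions (toy TPR, flat off-axis profiles, 51 projections; all names and numbers are illustrative, not the paper's measured machine data):

      import numpy as np

      def point_dose(scp, tpr, d_rad, open_time, off_lat, off_long):
          # output factor x TPR(radiological depth) x lateral and
          # longitudinal off-axis factors x fractional leaf-open time
          return float(np.sum(scp * tpr(d_rad) * off_lat * off_long * open_time))

      tpr = lambda d: np.exp(-0.05 * (d - 1.5))    # toy TPR, unity at 1.5 cm
      n = 51                                       # projections per rotation
      print(point_dose(scp=1.0, tpr=tpr,
                       d_rad=np.full(n, 10.0),     # radiological depths (cm)
                       open_time=np.full(n, 0.4),  # sinogram leaf-open times
                       off_lat=np.ones(n), off_long=np.ones(n)))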

  15. Analysis of the dose calculation accuracy for IMRT in lung: a 2D approach.

    PubMed

    Dvorak, Pavel; Stock, Markus; Kroupa, Bernhard; Bogner, Joachim; Georg, Dietmar

    2007-01-01

    The purpose of this study was to compare the dosimetric accuracy of IMRT plans for targets in lung with the accuracy of standard uniform-intensity conformal radiotherapy for different dose calculation algorithms. Tests were performed utilizing a special phantom manufactured from cork and polystyrene in order to quantify the uncertainty of two commercial TPS for IMRT in the lung. Ionization and film measurements were performed at various measuring points/planes. Additionally, single-beam and uniform-intensity multiple-beam tests were performed, in order to investigate deviations due to other characteristics of IMRT. Helax-TMS V6.1(A) was tested for 6, 10 and 25 MV and BrainSCAN 5.2 for 6 MV photon beams, respectively. Pencil beam (PB) with simple inhomogeneity correction and 'collapsed cone' (CC) algorithms were applied for dose calculations. However, the latter was not incorporated during optimization; hence, only post-optimization recalculation was tested. Two-dimensional dose distributions were evaluated applying the gamma index concept. Conformal plans showed the same accuracy as IMRT plans. Ionization chamber measurements detected deviations of up to 5% when a PB algorithm was used for IMRT dose calculations. Significant improvement (deviations approximately 2%) was observed when IMRT plans were recalculated with the CC algorithm, especially for the highest nominal energy. All gamma evaluations confirmed substantial improvement with the CC algorithm in 2D. While PB dose distributions showed most discrepancies in lower (<50%) and high (>90%) dose regions, the CC dose distributions deviated mainly in the high dose gradient (20-80%) region. The advantages of IMRT (conformity, intra-target dose control) should be counterbalanced with possible calculation inaccuracies for targets in the lung. As long as no superior dose calculation algorithms are incorporated in the iterative optimization process, it should be used with great care. When only PB algorithm with simple

  16. Dose-response model for teratological experiments involving quantal responses

    SciTech Connect

    Rai, K.; Van Ryzin, J.

    1985-03-01

    This paper introduces a dose-response model for teratological quantal response data where the probability of response for an offspring from a female at a given dose varies with the litter size. The maximum likelihood estimators for the parameters of the model are given as the solution of a nonlinear iterative algorithm. Two methods of low-dose extrapolation are presented, one based on the litter size distribution and the other a conservative method. The resulting procedures are then applied to a teratological data set from the literature.
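
    A minimal version of such a model makes the response probability a logistic function of dose and litter size and fits it by maximum likelihood; the linear-in-litter-size term below is an illustrative choice, not necessarily the paper's parameterization.

      import numpy as np
      from scipy.optimize import minimize

      def neg_log_lik(params, dose, litter, resp):
          a, b, c = params
          p = 1.0 / (1.0 + np.exp(-(a + b * dose + c * litter)))
          p = np.clip(p, 1e-12, 1.0 - 1e-12)
          # binomial likelihood: resp affected offspring out of litter
          return -np.sum(resp * np.log(p) + (litter - resp) * np.log(1.0 - p))

      rng = np.random.default_rng(3)
      dose = np.repeat([0.0, 0.5, 1.0, 2.0], 25)   # dose groups
      litter = rng.integers(4, 14, dose.size)      # litter sizes
      p_true = 1.0 / (1.0 + np.exp(-(-2.0 + 1.2 * dose + 0.05 * litter)))
      resp = rng.binomial(litter, p_true)
      fit = minimize(neg_log_lik, x0=[0.0, 0.0, 0.0],
                     args=(dose, litter, resp), method="Nelder-Mead")
      print(fit.x)                                 # maximum-likelihood (a, b, c)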

  17. Utirik Atoll Dose Assessment

    SciTech Connect

    Robison, W.L.; Conrado, C.L.; Bogen, K.T

    1999-10-06

    On March 1, 1954, radioactive fallout from the nuclear test at Bikini Atoll code-named BRAVO was deposited on Utirik Atoll which lies about 187 km (300 miles) east of Bikini Atoll. The residents of Utirik were evacuated three days after the fallout started and returned to their atoll in May 1954. In this report we provide a final dose assessment for current conditions at the atoll based on extensive data generated from samples collected in 1993 and 1994. The estimated population average maximum annual effective dose using a diet including imported foods is 0.037 mSv y{sup -1} (3.7 mrem y{sup -1}). The 95% confidence limits are within a factor of three of their population average value. The population average integrated effective dose over 30-, 50-, and 70-y is 0.84 mSv (84, mrem), 1.2 mSv (120 mrem), and 1.4 mSv (140 mrem), respectively. The 95% confidence limits on the population-average value post 1998, i.e., the 30-, 50-, and 70-y integral doses, are within a factor of two of the mean value and are independent of time, t, for t > 5 y. Cesium-137 ({sup 137}Cs) is the radionuclide that contributes most of this dose, mostly through the terrestrial food chain and secondarily from external gamma exposure. The dose from weapons-related radionuclides is very low and of no consequence to the health of the population. The annual background doses in the U. S. and Europe are 3.0 mSv (300 mrem), and 2.4 mSv (240 mrem), respectively. The annual background dose in the Marshall Islands is estimated to be 1.4 mSv (140 mrem). The total estimated combined Marshall Islands background dose plus the weapons-related dose is about 1.5 mSv y{sup -1} (150 mrem y{sup -1}) which can be directly compared to the annual background effective dose of 3.0 mSv y{sup -1} (300 mrem y{sup -1}) for the U. S. and 2.4 mSv y{sup -1} (240 mrem y{sup -1}) for Europe. Moreover, the doses listed in this report are based only on the radiological decay of {sup 137}Cs (30.1 y half-life) and other

  18. Reasoning about systolic algorithms

    SciTech Connect

    Purushothaman, S.; Subrahmanyam, P.A.

    1988-12-01

    The authors present a methodology for verifying the correctness of systolic algorithms. The methodology is based on solving a set of Uniform Recurrence Equations obtained from a description of systolic algorithms as a set of recursive equations. They present an approach to mechanically verify the correctness of systolic algorithms, using the Boyer-Moore theorem prover. A mechanical correctness proof of an example from the literature is also presented.

  19. Algorithms for optimizing CT fluence control

    NASA Astrophysics Data System (ADS)

    Hsieh, Scott S.; Pelc, Norbert J.

    2014-03-01

    The ability to customize the incident x-ray fluence in CT via beam-shaping filters or mA modulation is known to improve image quality and/or reduce radiation dose. Previous work has shown that complete control of x-ray fluence (ray-by-ray fluence modulation) would further improve dose efficiency. While complete control of fluence is not currently possible, emerging concepts such as dynamic attenuators and inverse-geometry CT allow nearly complete control to be realized. Optimally using ray-by-ray fluence modulation requires solving a very high-dimensional optimization problem. Most optimization techniques fail or only provide approximate solutions. We present efficient algorithms for minimizing mean or peak variance given a fixed dose limit. The reductions in variance can easily be translated to reduction in dose, if the original variance met image quality requirements. For mean variance, a closed form solution is derived. The peak variance problem is recast as iterated, weighted mean variance minimization, and at each iteration it is possible to bound the distance to the optimal solution. We apply our algorithms in simulations of scans of the thorax and abdomen. Peak variance reductions of 45% and 65% are demonstrated in the abdomen and thorax, respectively, compared to a bowtie filter alone. Mean variance shows smaller gains (about 15%).
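
    The closed-form mean-variance solution can be stated in a few lines under a simplified model (per-ray variance exp(l_i)/f_i, dose proportional to total incident fluence; both modeling assumptions are ours, for illustration):

      import numpy as np

      def min_mean_variance_fluence(l, budget):
          # Lagrange condition gives f_i proportional to exp(l_i / 2)
          f = np.exp(np.asarray(l) / 2.0)
          return f * budget / f.sum()

      l = np.array([0.5, 1.0, 2.0, 4.0])           # line integrals, illustrative
      f = min_mean_variance_fluence(l, budget=1e6)
      print(f, (np.exp(l) / f).mean())             # fluences, mean variance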

  20. Competing Sudakov veto algorithms

    NASA Astrophysics Data System (ADS)

    Kleiss, Ronald; Verheyen, Rob

    2016-07-01

    We present a formalism to analyze the distribution produced by a Monte Carlo algorithm. We perform these analyses on several versions of the Sudakov veto algorithm, adding a cutoff, a second variable and competition between emission channels. The formal analysis allows us to prove that multiple, seemingly different competition algorithms, including those that are currently implemented in most parton showers, lead to the same result. Finally, we test their performance in a semi-realistic setting and show that there are significantly faster alternatives to the commonly used algorithms.

  1. A simple analytical method for heterogeneity corrections in low dose rate prostate brachytherapy

    NASA Astrophysics Data System (ADS)

    Hueso-González, Fernando; Vijande, Javier; Ballester, Facundo; Perez-Calatayud, Jose; Siebert, Frank-André

    2015-07-01

    In low energy brachytherapy, the presence of tissue heterogeneities contributes significantly to the discrepancies observed between treatment plan and delivered dose. In this work, we present a simplified analytical dose calculation algorithm for heterogeneous tissue. We compare it with Monte Carlo computations and assess its suitability for integration in clinical treatment planning systems. The algorithm, named RayStretch, is based on the classic equivalent path length method and TG-43 reference data. Analytical and Monte Carlo dose calculations using Penelope2008 are compared for a benchmark case: a prostate patient with calcifications. The results show a remarkable agreement between simulation and algorithm, the latter having, in addition, a high calculation speed. The proposed analytical model is compatible with clinical real-time treatment planning systems based on TG-43 consensus datasets for improving dose calculation and treatment quality in heterogeneous tissue. Moreover, the algorithm is applicable to any type of heterogeneity.
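
    The classic equivalent path length (EPL) correction that RayStretch builds on is easy to state: keep the geometric inverse-square term at the physical radius, but evaluate the radial dose terms at the density-scaled radius. A TG-43-flavored sketch (toy radial dose function and dose-rate constant; this is the generic EPL method, not RayStretch itself):

      import numpy as np

      def equivalent_path_length(densities, step_cm):
          # radiological distance along the source-to-point ray
          return float(np.sum(np.asarray(densities) * step_cm))

      def dose_epl(r_phys, r_eff, g, dose_rate_const=1.109):
          # inverse square at the physical radius, radial dose at r_eff
          return dose_rate_const * g(r_eff) / r_phys ** 2

      g = lambda r: np.exp(-0.002 * r)             # toy radial dose function
      rho = [1.0, 1.0, 1.6, 1.6, 1.0]              # water / calcification / water
      r_eff = equivalent_path_length(rho, step_cm=0.2)
      print(dose_epl(r_phys=1.0, r_eff=r_eff, g=g))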

  2. [High dose rate brachytherapy].

    PubMed

    Aisen, S; Carvalho, H A; Chavantes, M C; Esteves, S C; Haddad, C M; Permonian, A C; Taier, M do C; Marinheiro, R C; Feriancic, C V

    1992-01-01

    The high dose rate brachytherapy uses a single source of 192Ir with 10 Ci of nominal activity in a remote afterloading machine. This technique allows an outpatient treatment, without the inconveniences of conventional low dose rate brachytherapy such as the use of general anesthesia, rhachianesthesia, prolonged immobilization, and personnel exposure to radiation. The radiotherapy department is now studying 5 basic treatment schemes concerning carcinomas of the uterine cervix, endometrium, lung, esophagus and central nervous system tumors. With the Micro Selectron HDR, 257 treatment sessions were performed in 90 patients. Most were treated with weekly fractions, receiving a total of three to four treatments each. No complications were observed either during or after the procedure. Doses, fractionation and ideal combinations still have to be studied, so that a higher therapeutic ratio can be reached.

  3. [Role of individualized dosing in oncology: are we there yet?].

    PubMed

    Joerger, Markus

    2013-12-31

    There is no personalized anticancer treatment without individualized dosing. Clearly, BSA-based chemotherapy dosing does not reduce the substantial pharmacokinetic variability of individual drugs. Similarly, the new oral kinase inhibitors and endocrine compounds exhibit a marked pharmacokinetic variability due to variable absorption, hepatic metabolism and potential drug-drug and food-drug interactions. Therapeutic drug monitoring and genotyping may allow more individualized dosing of anticancer compounds in the future. However, the large number of anticancer drugs that are newly approved or undergoing preclinical development poses a substantial challenge for the development of substance-specific dosing algorithms. It will be necessary to implement individual dosing strategies in the early clinical development of new anticancer drugs to inform clinicians at the time of first approval.

  4. Dose Reduction Techniques

    SciTech Connect

    WAGGONER, L.O.

    2000-05-16

    As radiation safety specialists, one of the things we are required to do is evaluate tools, equipment, materials and work practices and decide whether the use of these products or work practices will reduce radiation dose or risk to the environment. There is a tendency for many workers who work with radioactive material to accomplish radiological work the same way they have always done it, rather than look for new technology or change their work practices. New technology is being developed all the time that can make radiological work easier and result in less radiation dose to the worker, or reduce the possibility that contamination will be spread to the environment. As we discuss the various tools and techniques that reduce radiation dose, keep in mind that the radiological controls should be reasonable. We cannot always get the dose to zero, so we must try to accomplish the work efficiently and cost-effectively. There are times we may have to accept that there is only so much we can do. The goal is to do the smart things that protect the worker but do not hinder him while the task is being accomplished. In addition, we should not demand that large amounts of money be spent for equipment that has marginal value in order to save a few millirem. We have broken the handout into sections that should simplify the presentation. Time, distance, shielding, and source reduction are methods used to reduce dose and are covered in Part I on work execution. We then look at operational considerations and radiological design parameters, and discuss the characteristics of personnel who deal with ALARA. This handout should give you an overview of what it takes to have an effective dose reduction program.

  5. Dose Calculation Spreadsheet

    1997-06-10

    VENTSAR XL is an EXCEL spreadsheet that can be used to calculate downwind doses as a result of a hypothetical atmospheric release. Both building effects and plume rise may be considered. VENTSAR XL will run using any version of Microsoft EXCEL version 4.0 or later. Macros (the programming language of EXCEL) were used to automate the calculations. The user enters a minimal amount of input and the code calculates the resulting concentrations and doses at various downwind distances as specified by the user.

  6. Comparison of the Performance of the Warfarin Pharmacogenetics Algorithms in Patients with Surgery of Heart Valve Replacement and Heart Valvuloplasty.

    PubMed

    Xu, Hang; Su, Shi; Tang, Wuji; Wei, Meng; Wang, Tao; Wang, Dongjin; Ge, Weihong

    2015-09-01

    A large number of warfarin pharmacogenetics algorithms have been published. Our research aimed to evaluate the performance of selected pharmacogenetic algorithms in patients with surgery of heart valve replacement and heart valvuloplasty during the phases of initial and stable anticoagulation treatment. 10 pharmacogenetic algorithms were selected by searching PubMed. We compared the performance of the selected algorithms in a cohort of 193 patients during the phases of initial and stable anticoagulation therapy. Predicted dose was compared to therapeutic dose using the percentage of predicted doses falling within 20% of the actual dose (percentage within 20%) and the mean absolute error (MAE). The average warfarin dose for patients was 3.05±1.23 mg/day for initial treatment and 3.45±1.18 mg/day for stable treatment. The percentages of the predicted dose within 20% of the therapeutic dose were 44.0±8.8% and 44.6±9.7% for the initial and stable phases, respectively. The MAEs of the selected algorithms were 0.85±0.18 mg/day and 0.93±0.19 mg/day, respectively. All algorithms had better performance in the ideal group than in the low dose and high dose groups. The only exception is the Wadelius et al. algorithm, which had better performance in the high dose group. The algorithms had similar performance except for the Wadelius et al. and Miao et al. algorithms, which had poor accuracy in our study cohort. The Gage et al. algorithm had better performance in both phases of initial and stable treatment. Algorithms had relatively higher accuracy in the >50 years group of patients in the stable phase.
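
    The two headline metrics of the comparison are straightforward to compute; the doses below are invented for illustration.

      import numpy as np

      def evaluate(predicted, actual):
          predicted, actual = np.asarray(predicted), np.asarray(actual)
          mae = np.mean(np.abs(predicted - actual))            # mg/day
          pct20 = 100.0 * np.mean(np.abs(predicted - actual) <= 0.2 * actual)
          return mae, pct20                                    # (MAE, % within 20%)

      pred = [2.8, 3.6, 5.1, 2.0]                  # predicted doses, mg/day
      act = [3.0, 3.5, 4.0, 2.5]                   # therapeutic doses, mg/day
      print(evaluate(pred, act))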

  7. Assessment of phase based dose modulation for improved dose efficiency in cardiac CT on an anthropomorphic motion phantom

    NASA Astrophysics Data System (ADS)

    Budde, Adam; Nilsen, Roy; Nett, Brian

    2014-03-01

    State of the art automatic exposure control modulates the tube current across view angle and Z based on patient anatomy for use in axial full scan reconstructions. Cardiac CT, however, uses a fundamentally different image reconstruction that applies a temporal weighting to reduce motion artifacts. This paper describes a phase based mA modulation that goes beyond axial and ECG modulation; it uses knowledge of the temporal view weighting applied within the reconstruction algorithm to improve dose efficiency in cardiac CT scanning. Using physical phantoms and synthetic noise emulation, we measure how knowledge of sinogram temporal weighting and the prescribed cardiac phase can be used to improve dose efficiency. First, we validated that a synthetic CT noise emulation method produced realistic image noise. Next, we used the CT noise emulation method to simulate mA modulation on scans of a physical anthropomorphic phantom where a motion profile corresponding to a heart rate of 60 beats per minute was used. The CT noise emulation method matched noise to lower dose scans across the image within 1.5% relative error. Using this noise emulation method to simulate modulating the mA while keeping the total dose constant, the image variance was reduced by an average of 11.9% on a scan with 50 msec padding, demonstrating improved dose efficiency. Radiation dose reduction in cardiac CT can be achieved while maintaining the same level of image noise through phase based dose modulation that incorporates knowledge of the cardiac reconstruction algorithm.
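
    The underlying idea, dimming the tube current for views that the cardiac reconstruction will down-weight anyway, can be sketched with a toy Gaussian temporal weight (the real modulation follows the scanner's actual reconstruction weighting, not this stand-in; all parameters are illustrative):

      import numpy as np

      def phase_modulated_ma(phases, target, width, ma_full, ma_floor=0.2):
          # full current inside the reconstruction window, dimmed outside
          w = np.exp(-0.5 * ((phases - target) / width) ** 2)
          return ma_full * np.maximum(w, ma_floor)

      phases = np.linspace(0.0, 1.0, 100)          # cardiac phase of each view
      ma = phase_modulated_ma(phases, target=0.75, width=0.05, ma_full=500.0)
      print(ma.min(), ma.max())                    # floor current ... full current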

  8. Bayesian population modeling of drug dosing adherence.

    PubMed

    Fellows, Kelly; Stoneking, Colin J; Ramanathan, Murali

    2015-10-01

    Adherence is a frequent contributing factor to variations in drug concentrations and efficacy. The purpose of this work was to develop an integrated population model to describe variation in adherence, dose-timing deviations, overdosing and persistence to dosing regimens. The hybrid Markov chain-von Mises method for modeling adherence in individual subjects was extended to the population setting using a Bayesian approach. Four integrated population models for overall adherence, the two-state Markov chain transition parameters, dose-timing deviations, overdosing and persistence were formulated and critically compared. A Markov chain Monte Carlo algorithm was used to identify distribution parameters and to run simulations. The model was challenged with medication event monitoring system data for 207 hypertension patients. The four Bayesian models demonstrated good mixing and convergence characteristics. The distributions of adherence, dose-timing deviations, overdosing and persistence were markedly non-normal and diverse. The models varied in complexity and the method used to incorporate inter-dependence with the preceding dose in the two-state Markov chain. The model that incorporated a cooperativity term for inter-dependence and a hyperbolic parameterization of the transition matrix probabilities was identified as the preferred model over the alternatives. The simulated probability densities from the model satisfactorily fit the observed probability distributions of adherence, dose-timing deviations, overdosing and persistence parameters in the sample patients. The model also adequately described the median and observed quartiles for these parameters. The Bayesian model for adherence provides a parsimonious, yet integrated, description of adherence in populations. It may find potential applications in clinical trial simulations and pharmacokinetic-pharmacodynamic modeling. PMID:26319548
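
    A toy version of the two-state Markov chain at the core of the model (transition probabilities are invented, and the full hybrid Markov chain-von Mises machinery and Bayesian fitting are omitted):

        import numpy as np

        rng = np.random.default_rng(0)

        def simulate_adherence(p_take_after_take, p_take_after_miss, n_days=100):
            # Two-state Markov chain: today's dose-taking depends on yesterday's.
            taken = np.empty(n_days, dtype=bool)
            taken[0] = True
            for day in range(1, n_days):
                p = p_take_after_take if taken[day - 1] else p_take_after_miss
                taken[day] = rng.random() < p
            return taken

        history = simulate_adherence(0.95, 0.70)
        # Dose-timing deviations on the 24 h clock, modeled as von Mises (radians).
        timing_dev = rng.vonmises(0.0, 4.0, size=int(history.sum()))
        print("overall adherence:", history.mean())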

  9. Algorithm That Synthesizes Other Algorithms for Hashing

    NASA Technical Reports Server (NTRS)

    James, Mark

    2010-01-01

    An algorithm that includes a collection of several subalgorithms has been devised as a means of synthesizing still other algorithms (which could include computer code) that utilize hashing to determine whether an element (typically, a number or other datum) is a member of a set (typically, a list of numbers). Each subalgorithm synthesizes an algorithm (e.g., a block of code) that maps a static set of key hashes to a somewhat linear monotonically increasing sequence of integers. The goal in formulating this mapping is to cause the length of the sequence thus generated to be as close as practicable to the original length of the set and thus to minimize gaps between the elements. The advantage of the approach embodied in this algorithm is that it completely avoids the traditional approach of hash-key look-ups that involve either secondary hash generation and look-up or further searching of a hash table for a desired key in the event of collisions. This algorithm guarantees that it will never be necessary to perform a search or to generate a secondary key in order to determine whether an element is a member of a set. This algorithm further guarantees that any algorithm that it synthesizes can be executed in constant time. To enforce these guarantees, the subalgorithms are formulated to employ a set of techniques, each of which works very effectively for a certain class of hash-key values. These subalgorithms are of two types, summarized as follows: Given a list of numbers, try to find one or more solutions in which, if each number is shifted to the right by a constant number of bits and then masked with a rotating mask that isolates a set of bits, a unique number is thereby generated. In a variant of the foregoing procedure, omit the masking. Try various combinations of shifting, masking, and/or offsets until the solutions are found. From the set of solutions, select the one that provides the greatest compression for the representation and is executable in the
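
    A stripped-down sketch of the shift-and-mask search described above (hypothetical keys and search bounds): find a right-shift and bit mask under which every key hashes to a distinct value, so membership testing needs no collision handling and runs in constant time.

        def synthesize_hash(keys, max_mask_bits=16, max_shift=32):
            # Try small masks first so the first solution found compresses most.
            for bits in range(1, max_mask_bits + 1):
                mask = (1 << bits) - 1
                for shift in range(max_shift):
                    if len({(k >> shift) & mask for k in keys}) == len(keys):
                        return shift, mask   # injective on this key set
            return None

        keys = [1024, 2177, 4242, 8391, 16699]
        shift, mask = synthesize_hash(keys)
        table = [None] * (mask + 1)
        for k in keys:
            table[(k >> shift) & mask] = k   # lookup is one shift, mask and compare
        print(shift, mask, table[(4242 >> shift) & mask] == 4242)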

  10. Computing proton dose to irregularly moving targets

    NASA Astrophysics Data System (ADS)

    Phillips, Justin; Gueorguiev, Gueorgui; Shackleford, James A.; Grassberger, Clemens; Dowdell, Stephen; Paganetti, Harald; Sharp, Gregory C.

    2014-08-01

    phantom (2 mm, 2%), and 90.8% (3 mm, 3%) for the patient data. Conclusions: We have demonstrated a method for accurately reproducing proton dose to an irregularly moving target from a single CT image. We believe this algorithm could prove a useful tool to study the dosimetric impact of baseline shifts either before or during treatment.

  11. Computing Proton Dose to Irregularly Moving Targets

    PubMed Central

    Phillips, Justin; Gueorguiev, Gueorgui; Shackleford, James A.; Grassberger, Clemens; Dowdell, Stephen; Paganetti, Harald; Sharp, Gregory C.

    2014-01-01

    phantom (2 mm, 2%), and 90.8% (3 mm, 3%) for the patient data. Conclusions: We have demonstrated a method for accurately reproducing proton dose to an irregularly moving target from a single CT image. We believe this algorithm could prove a useful tool to study the dosimetric impact of baseline shifts either before or during treatment. PMID:25029239

  12. Effect of deformable registration on the dose calculated in radiation therapy planning CT scans of lung cancer patients

    SciTech Connect

    Cunliffe, Alexandra R.; Armato, Samuel G.; White, Bradley; Justusson, Julia; Contee, Clay; Malik, Renuka; Al-Hallaq, Hania A.

    2015-01-15

    Purpose: To characterize the effects of deformable image registration of serial computed tomography (CT) scans on the radiation dose calculated from a treatment planning scan. Methods: Eighteen patients who received curative doses (≥60 Gy, 2 Gy/fraction) of photon radiation therapy for lung cancer treatment were retrospectively identified. For each patient, a diagnostic-quality pretherapy (4–75 days) CT scan and a treatment planning scan with an associated dose map were collected. To establish correspondence between scan pairs, a researcher manually identified anatomically corresponding landmark point pairs between the two scans. Pretherapy scans then were coregistered with planning scans (and associated dose maps) using the demons deformable registration algorithm and two variants of the Fraunhofer MEVIS algorithm (“Fast” and “EMPIRE10”). Landmark points in each pretherapy scan were automatically mapped to the planning scan using the displacement vector field output from each of the three algorithms. The Euclidean distance between manually and automatically mapped landmark points (d_E) and the absolute difference in planned dose (|ΔD|) were calculated. Using regression modeling, |ΔD| was modeled as a function of d_E, dose (D), dose standard deviation (SD_dose) in an eight-pixel neighborhood, and the registration algorithm used. Results: Over 1400 landmark point pairs were identified, with 58–93 (median: 84) points identified per patient. Average |ΔD| across patients was 3.5 Gy (range: 0.9–10.6 Gy). Registration accuracy was highest using the Fraunhofer MEVIS EMPIRE10 algorithm, with an average d_E across patients of 5.2 mm (compared with >7 mm for the other two algorithms). Consequently, average |ΔD| was also lowest using the Fraunhofer MEVIS EMPIRE10 algorithm. |ΔD| increased significantly as a function of d_E (0.42 Gy/mm), D (0.05 Gy/Gy), SD_dose (1.4 Gy/Gy), and the algorithm used (≤1 Gy). Conclusions: An
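
    The evaluation step reduces to simple array operations; a sketch with invented coordinates and doses (the real study sampled the planned dose map at each landmark):

        import numpy as np

        # Manually placed landmarks on the planning scan and the same points
        # mapped through a registration's displacement vector field (mm).
        manual = np.array([[102.0, 88.0, 40.0], [75.0, 60.0, 32.0]])
        mapped = manual + np.array([[3.1, -1.2, 0.5], [6.0, 2.2, -1.0]])
        d_e = np.linalg.norm(manual - mapped, axis=1)   # registration error (mm)

        # Planned dose sampled at the manual and the automatically mapped points.
        dose_manual = np.array([42.0, 55.3])            # Gy
        dose_mapped = np.array([41.1, 52.9])            # Gy
        abs_dd = np.abs(dose_manual - dose_mapped)      # |ΔD|, the regression target
        print(d_e, abs_dd)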

  13. LADTAPXL Aqueous Dose Spreadsheet

    SciTech Connect

    Hamby, David M.; Simpkins, Ali A.; Jannik, G. T.

    1999-08-10

    LADTAPXL is an EXCEL spreadsheet model of the NRC computer code LADTAP. LADTAPXL calculates maximally exposed individual and population doses from chronic liquid releases. Environmental pathways include external exposure resulting from recreational activities on the Savannah River and ingestion of water, fish, and invertebrates of Savannah River origin.

  14. When is a dose not a dose

    SciTech Connect

    Bond, V.P.

    1991-01-01

    Although an enormous amount of progress has been made in the fields of radiation protection and risk assessment, a number of significant problems remain. The one problem which transcends all the rest, and which has been subject to considerable misunderstanding, involves what has come to be known as the 'linear non-threshold hypothesis', or 'linear hypothesis'. Particularly troublesome has been the interpretation that any amount of radiation can cause an increase in the excess incidence of cancer. The linear hypothesis has dominated radiation protection philosophy for more than three decades, with enormous financial, societal and political impacts and has engendered an almost morbid fear of low-level exposure to ionizing radiation in large segments of the population. This document presents a different interpretation of the linear hypothesis. The basis for this view lies in the evolution of dose-response functions, particularly with respect to their use initially in the context of early acute effects, and then for the late effects, carcinogenesis and mutagenesis.

  15. In vivo verification of radiation dose delivered to healthy tissue during radiotherapy for breast cancer

    NASA Astrophysics Data System (ADS)

    Lonski, P.; Taylor, M. L.; Hackworth, W.; Phipps, A.; Franich, R. D.; Kron, T.

    2014-03-01

    Different treatment planning system (TPS) algorithms calculate radiation dose in different ways. This work compares measurements made in vivo to the dose calculated at out-of-field locations using three different commercially available algorithms in the Eclipse treatment planning system. LiF:Mg,Cu,P thermoluminescent dosimeter (TLD) chips were placed with 1 cm build-up at six locations on the contralateral side of 5 patients undergoing radiotherapy for breast cancer. TLD readings were compared to calculations of Pencil Beam Convolution (PBC), Anisotropic Analytical Algorithm (AAA) and Acuros XB (XB). AAA predicted zero dose at points beyond 16 cm from the field edge. In the same region, PBC returned an unrealistically constant result independent of distance, while XB showed good agreement with measured data, although it consistently underestimated dose by ~0.1% of the prescription dose. At points closer to the field edge, XB was the superior algorithm, agreeing with TLD results to within 15% of measured dose. Both AAA and PBC showed mixed agreement, with overall discrepancies considerably greater than those of XB. While XB is certainly the preferable algorithm, it should be noted that TPS algorithms in general are not designed to calculate dose at peripheral locations, and calculation results in such regions should be treated with caution.

  16. Totally parallel multilevel algorithms

    NASA Technical Reports Server (NTRS)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  17. Low-Dose Carcinogenicity Studies

    EPA Science Inventory

    One of the major deficiencies of cancer risk assessments is the lack of low-dose carcinogenicity data. Most assessments require extrapolation from high to low doses, which is subject to various uncertainties. Only 4 low-dose carcinogenicity studies and 5 low-dose biomarker/pre-n...

  18. A Novel Admixture-Based Pharmacogenetic Approach to Refine Warfarin Dosing in Caribbean Hispanics

    PubMed Central

    Claudio-Campos, Karla; Rivera-Miranda, Giselle; Bermúdez-Bosch, Luis; Renta, Jessicca Y.; Cadilla, Carmen L.; Cruz, Iadelisse; Feliu, Juan F.; Vergara, Cunegundo; Ruaño, Gualberto

    2016-01-01

    Aim This study is aimed at developing a novel admixture-adjusted pharmacogenomic approach to individually refine warfarin dosing in Caribbean Hispanic patients. Patients & Methods A multiple linear regression analysis of effective warfarin doses versus relevant genotypes, admixture, clinical and demographic factors was performed in 255 patients and further validated externally in another cohort of 55 individuals. Results The admixture-adjusted, genotype-guided warfarin dosing refinement algorithm developed in Caribbean Hispanics showed better predictability (R2 = 0.70, MAE = 0.72 mg/day) than a clinical algorithm that excluded genotypes and admixture (R2 = 0.60, MAE = 0.99 mg/day), and outperformed two prior pharmacogenetic algorithms in predicting effective dose in this population. For patients at the highest risk of adverse events, 45.5% of the dose predictions using the developed pharmacogenetic model resulted in ideal dose as compared with only 29% when using the clinical non-genetic algorithm (p<0.001). The admixture-driven pharmacogenetic algorithm predicted 58% of warfarin dose variance when externally validated in 55 individuals from an independent validation cohort (MAE = 0.89 mg/day, 24% mean bias). Conclusions Results supported our rationale to incorporate individuals' genotypes and unique admixture metrics into pharmacogenetic refinement models in order to increase predictability when expanding them to admixed populations like Caribbean Hispanics. Trial Registration ClinicalTrials.gov NCT01318057 PMID:26745506
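
    The model class is ordinary multiple linear regression; a sketch on simulated data (all covariates, coefficients and column meanings are invented, not the study's):

        import numpy as np

        rng = np.random.default_rng(1)
        n = 255
        X = np.column_stack([
            np.ones(n),                # intercept
            rng.integers(0, 3, n),     # VKORC1 variant alleles (0/1/2)
            rng.integers(0, 3, n),     # CYP2C9 variant alleles (0/1/2)
            rng.random(n),             # global admixture fraction
            rng.normal(65, 12, n),     # age (years)
        ])
        dose = X @ np.array([5.0, -1.1, -0.8, 1.2, -0.02]) + rng.normal(0, 0.5, n)

        beta, *_ = np.linalg.lstsq(X, dose, rcond=None)
        pred = X @ beta
        r2 = 1 - np.sum((dose - pred) ** 2) / np.sum((dose - dose.mean()) ** 2)
        mae = np.mean(np.abs(dose - pred))
        print(round(r2, 2), round(mae, 2))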

  19. A spatially encoded dose difference maximal intensity projection map for patient dose evaluation: A new first line patient quality assurance tool

    SciTech Connect

    Hu Weigang; Graff, Pierre; Boettger, Thomas; Pouliot, Jean; and others

    2011-04-15

    Purpose: To develop a spatially encoded dose difference maximal intensity projection (DD-MIP) as an online patient dose evaluation tool for visualizing the dose differences between the planning dose and the dose on the treatment day. Methods: Megavoltage cone-beam CT (MVCBCT) images acquired on the treatment day are used for generating the dose difference index. Each index is represented by a different color for the underdose, acceptable, and overdose regions. A maximal intensity projection (MIP) algorithm is developed to compress all the information of an arbitrary 3D dose difference index into a 2D DD-MIP image. In such an algorithm, a distance transformation is generated based on the planning CT. Then, two new volumes representing the overdose and underdose regions of the dose difference index are encoded with the distance transformation map. The distance-encoded indices of each volume are normalized using the skin distance obtained on the planning CT. After that, two MIPs are generated based on the underdose and overdose volumes with green-to-blue and green-to-red lookup tables, respectively. Finally, the two MIPs are merged with an appropriate transparency level and rendered on the planning CT images. Results: The spatially encoded DD-MIP was implemented in a dose-guided radiotherapy prototype and tested on 33 MVCBCT images from six patients. The user can easily establish the thresholds for overdose and underdose. A 3% difference between the treatment and planning dose was used as the threshold in the study; hence, the DD-MIP shows red or blue for dose differences >3% or ≤-3%, respectively. With such a method, the overdose and underdose regions can be visualized and distinguished without being overshadowed by superficial dose differences. Conclusions: A DD-MIP algorithm was developed that compresses information from 3D into a single or two orthogonal projections while indicating to the user whether the dose difference is on the skin surface or deeper.
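
    A compact sketch of the projection itself on synthetic volumes (thresholds, shapes and the body mask are placeholders; the published method also applies the color lookup tables and renders onto the planning CT):

        import numpy as np
        from scipy import ndimage

        planned = 60.0 * np.random.rand(64, 64, 64)
        treated = planned * (1.0 + 0.05 * (np.random.rand(64, 64, 64) - 0.5))
        diff = (treated - planned) / planned.clip(min=1e-6)   # dose difference index

        body = planned > 1.0                                  # crude body mask
        depth = ndimage.distance_transform_edt(body)          # distance from skin
        depth_norm = depth / depth.max()

        threshold = 0.03                                      # 3% criterion
        over = np.where(diff > threshold, depth_norm, 0.0)    # distance-encoded overdose
        under = np.where(diff < -threshold, depth_norm, 0.0)  # distance-encoded underdose
        over_mip = over.max(axis=0)                           # 2D DD-MIP, overdose
        under_mip = under.max(axis=0)                         # 2D DD-MIP, underdose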

  20. Dosimetric algorithm to reproduce isodose curves obtained from a LINAC.

    PubMed

    Estrada Espinosa, Julio Cesar; Martínez Ovalle, Segundo Agustín; Pereira Benavides, Cinthia Kotzian

    2014-01-01

    In this work, isodose curves are obtained with a new dosimetric algorithm that uses numerical data from percentage depth dose (PDD) curves and the maximum absorbed dose profile, calculated by Monte Carlo for an 18 MV LINAC. The software reproduces the absorbed dose percentage in the whole irradiated volume quickly and with a good approximation. To validate the results, the full geometry of an 18 MV LINAC and a water phantom were modeled, and the simulations were run with the MCNPX code to obtain the PDD and profiles at all depths of the radiation beam. These data were then used by the code to produce the dose percentages at any point of the irradiated volume. The absorbed dose was also reproduced for any voxel size at any point of the irradiated volume, even for voxels as small as a single pixel. The dosimetric algorithm is able to reproduce the absorbed dose induced by a radiation beam over a water phantom, considering PDD and profiles, whose maximum percent value is in the build-up region. Calculation time for the algorithm is only a few seconds, compared with the days taken when the calculation is carried out by Monte Carlo. PMID:25045398
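
    The reconstruction idea can be sketched in a few lines: approximate the percent dose at (x, z) as the depth dose times the normalized off-axis profile, both interpolated from the Monte Carlo tables. The curves below are crude stand-ins, not MCNPX output:

        import numpy as np

        depths = np.linspace(0, 30, 61)                            # cm
        pdd = 100 * np.exp(-((depths - 3.2) ** 2) / 400)           # stand-in 18 MV PDD
        offaxis = np.linspace(-10, 10, 81)                         # cm
        profile = 100 / (1 + np.exp((np.abs(offaxis) - 5) / 0.4))  # stand-in profile

        def percent_dose(x, z):
            return np.interp(z, depths, pdd) * np.interp(x, offaxis, profile) / 100.0

        grid = np.array([[percent_dose(x, z) for x in offaxis] for z in depths])
        # Isodose curves are the level sets of this grid (e.g., 95%, 80%, 50%).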

  1. Dosimetric Algorithm to Reproduce Isodose Curves Obtained from a LINAC

    PubMed Central

    Estrada Espinosa, Julio Cesar; Martínez Ovalle, Segundo Agustín; Pereira Benavides, Cinthia Kotzian

    2014-01-01

    In this work, isodose curves are obtained with a new dosimetric algorithm that uses numerical data from percentage depth dose (PDD) curves and the maximum absorbed dose profile, calculated by Monte Carlo for an 18 MV LINAC. The software reproduces the absorbed dose percentage in the whole irradiated volume quickly and with a good approximation. To validate the results, the full geometry of an 18 MV LINAC and a water phantom were modeled, and the simulations were run with the MCNPX code to obtain the PDD and profiles at all depths of the radiation beam. These data were then used by the code to produce the dose percentages at any point of the irradiated volume. The absorbed dose was also reproduced for any voxel size at any point of the irradiated volume, even for voxels as small as a single pixel. The dosimetric algorithm is able to reproduce the absorbed dose induced by a radiation beam over a water phantom, considering PDD and profiles, whose maximum percent value is in the build-up region. Calculation time for the algorithm is only a few seconds, compared with the days taken when the calculation is carried out by Monte Carlo. PMID:25045398

  2. Radiotherapy dosimetry of the breast : Factors affecting dose to the patient

    NASA Astrophysics Data System (ADS)

    Venables, Karen

    The work presented in this thesis developed from the quality assurance for the START trial. This provided a unique opportunity to perform measurements in a breast-shaped, soft-tissue-equivalent phantom in over 40 hospitals, representing 75% of the radiotherapy centres in the UK. A wide range of planning systems using beam-library, beam-model and convolution-based algorithms have been compared. The limitations of current algorithms as applied to breast radiotherapy have been investigated by analysing the results of the START quality assurance programme, performing further measurements of surface dose and setting up a Monte Carlo system to calculate dose distributions and superficial doses. Measurements in both 2D and 3D breast phantoms indicated that the average measured dose at the centre of the breast was lower than that calculated on the planning system by approximately 2%. Surface dose measurements showed good agreement between measurements and Monte Carlo calculations, with values ranging from 6% of the maximum dose for a small field (5 cm x 5 cm) at normal incidence to 37% for a large field (9 cm x 20 cm) at an angle of 75°. Calculation on CT plans with pixel-by-pixel correction for the breast density indicated that monitor units are lower by an average of 3% compared to a bulk-density-corrected plan assuming a density of 1 g/cm³. The average dose estimated from TLD in build-up caps placed on the patient surface was 0.99 of the prescribed dose. This shows that the underestimation of dose due to the assumption of unit-density tissue is partially cancelled by the overestimation of dose by the algorithms. The work showed that simple calculation algorithms can be used for calculation of dose to the breast; however, they are less accurate for patients who have undergone a mastectomy and in regions close to inhomogeneities, where more complex algorithms are needed.

  3. Effects of Proton Radiation Dose, Dose Rate and Dose Fractionation on Hematopoietic Cells in Mice

    PubMed Central

    Ware, J. H.; Sanzari, J.; Avery, S.; Sayers, C.; Krigsfeld, G.; Nuth, M.; Wan, X. S.; Rusek, A.; Kennedy, A. R.

    2012-01-01

    The present study evaluated the acute effects of radiation dose, dose rate and fractionation as well as the energy of protons in hematopoietic cells of irradiated mice. The mice were irradiated with a single dose of 51.24 MeV protons at a dose of 2 Gy and a dose rate of 0.05–0.07 Gy/min or 1 GeV protons at doses of 0.1, 0.2, 0.5, 1, 1.5 and 2 Gy delivered in a single dose at dose rates of 0.05 or 0.5 Gy/min or in five daily dose fractions at a dose rate of 0.05 Gy/min. Sham-irradiated animals were used as controls. The results demonstrate a dose-dependent loss of white blood cells (WBCs) and lymphocytes by up to 61% and 72%, respectively, in mice irradiated with protons at doses up to 2 Gy. The results also demonstrate that the dose rate, fractionation pattern and energy of the proton radiation did not have significant effects on WBC and lymphocyte counts in the irradiated animals. These results suggest that the acute effects of proton radiation on WBC and lymphocyte counts are determined mainly by the radiation dose, with very little contribution from the dose rate (over the range of dose rates evaluated), fractionation and energy of the protons. PMID:20726731

  4. Effects of proton radiation dose, dose rate and dose fractionation on hematopoietic cells in mice

    SciTech Connect

    Ware, J.H.; Rusek, A.; Sanzari, J.; Avery, S.; Sayers, C.; Krigsfeld, G.; Nuth, M.; Wan, X.S.; Kennedy, A.R.

    2010-09-01

    The present study evaluated the acute effects of radiation dose, dose rate and fractionation as well as the energy of protons in hematopoietic cells of irradiated mice. The mice were irradiated with a single dose of 51.24 MeV protons at a dose of 2 Gy and a dose rate of 0.05-0.07 Gy/min or 1 GeV protons at doses of 0.1, 0.2, 0.5, 1, 1.5 and 2 Gy delivered in a single dose at dose rates of 0.05 or 0.5 Gy/min or in five daily dose fractions at a dose rate of 0.05 Gy/min. Sham-irradiated animals were used as controls. The results demonstrate a dose-dependent loss of white blood cells (WBCs) and lymphocytes by up to 61% and 72%, respectively, in mice irradiated with protons at doses up to 2 Gy. The results also demonstrate that the dose rate, fractionation pattern and energy of the proton radiation did not have significant effects on WBC and lymphocyte counts in the irradiated animals. These results suggest that the acute effects of proton radiation on WBC and lymphocyte counts are determined mainly by the radiation dose, with very little contribution from the dose rate (over the range of dose rates evaluated), fractionation and energy of the protons.

  5. Dose reconstruction for real-time patient-specific dose estimation in CT

    SciTech Connect

    De Man, Bruno Yin, Zhye; Wu, Mingye; FitzGerald, Paul; Kalra, Mannudeep

    2015-05-15

    Purpose: Many recent computed tomography (CT) dose reduction approaches belong to one of three categories: statistical reconstruction algorithms, efficient x-ray detectors, and optimized CT acquisition schemes with precise control over the x-ray distribution. The latter category could greatly benefit from fast and accurate methods for dose estimation, which would enable real-time patient-specific protocol optimization. Methods: The authors present a new method for volumetrically reconstructing absorbed dose on a per-voxel basis, directly from the actual CT images. The authors’ specific implementation combines a distance-driven pencil-beam approach to model the first-order x-ray interactions with a set of Gaussian convolution kernels to model the higher-order x-ray interactions. The authors performed a number of 3D simulation experiments comparing the proposed method to a Monte Carlo based ground truth. Results: The authors’ results indicate that the proposed approach offers a good trade-off between accuracy and computational efficiency. The images show a good qualitative correspondence to Monte Carlo estimates. Preliminary quantitative results show errors below 10%, except in bone regions, where the authors see a bigger model mismatch. The computational complexity is similar to that of a low-resolution filtered-backprojection algorithm. Conclusions: The authors present a method for analytic dose reconstruction in CT, similar to the techniques used in radiation therapy planning with megavoltage energies. Future work will include refinements of the proposed method to improve the accuracy as well as a more extensive validation study. The proposed method is not intended to replace methods that track individual x-ray photons, but the authors expect that it may prove useful in applications where real-time patient-specific dose estimation is required.
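
    A 2D cartoon of the two-term structure (invented coefficients; the real method is distance-driven and fully 3D): a primary term attenuated along the beam path plus a Gaussian-blurred term standing in for higher-order interactions.

        import numpy as np
        from scipy import ndimage

        mu = 0.02                               # attenuation per voxel (arbitrary)
        image = np.ones((128, 128))             # relative attenuation map from CT
        path = mu * np.cumsum(image, axis=0)    # radiological path along the beam
        primary = np.exp(-path)                 # first-order x-ray interactions
        scatter = ndimage.gaussian_filter(primary, sigma=6.0)  # higher-order term
        dose = primary + 0.3 * scatter          # weighted sum as a dose surrogate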

  6. Dose specification for 192Ir high dose rate brachytherapy in terms of dose-to-water-in-medium and dose-to-medium-in-medium

    NASA Astrophysics Data System (ADS)

    Paiva Fonseca, Gabriel; Carlsson Tedgren, Åsa; Reniers, Brigitte; Nilsson, Josef; Persson, Maria; Yoriyaz, Hélio; Verhaegen, Frank

    2015-06-01

    Dose calculation in high dose rate brachytherapy with 192Ir is usually based on the TG-43U1 protocol where all media are considered to be water. Several dose calculation algorithms have been developed that are capable of handling heterogeneities with two possibilities to report dose: dose-to-medium-in-medium (Dm,m) and dose-to-water-in-medium (Dw,m). The relation between Dm,m and Dw,m for 192Ir is the main goal of this study, in particular the dependence of Dw,m on the dose calculation approach using either large cavity theory (LCT) or small cavity theory (SCT). A head and neck case was selected due to the presence of media with a large range of atomic numbers relevant to tissues and mass densities such as air, soft tissues and bone interfaces. This case was simulated using a Monte Carlo (MC) code to score: Dm,m, Dw,m (LCT), mean photon energy and photon fluence. Dw,m (SCT) was derived from MC simulations using the ratio between the unrestricted collisional stopping power of the actual medium and water. Differences between Dm,m and Dw,m (SCT or LCT) can be negligible (<1%) for some tissues e.g. muscle and significant for other tissues with differences of up to 14% for bone. Using SCT or LCT approaches leads to differences between Dw,m (SCT) and Dw,m (LCT) up to 29% for bone and 36% for teeth. The mean photon energy distribution ranges from 222 keV up to 356 keV. However, results obtained using mean photon energies are not equivalent to the ones obtained using the full, local photon spectrum. This work concludes that it is essential that brachytherapy studies clearly report the dose quantity. It further shows that while differences between Dm,m and Dw,m (SCT) mainly depend on tissue type, differences between Dm,m and Dw,m (LCT) are, in addition, significantly dependent on the local photon energy fluence spectrum which varies with distance to implanted sources.
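
    The small cavity theory conversion is a pointwise rescaling; the stopping-power values below are placeholders, not data from the paper:

        # D_w,m (SCT) = D_m,m x (S/rho)_water / (S/rho)_medium
        def dwm_from_dmm(d_mm, s_water, s_medium):
            return d_mm * (s_water / s_medium)

        d_bone = 2.00   # Gy, dose-to-medium-in-medium in a bone voxel
        print(dwm_from_dmm(d_bone, s_water=1.85, s_medium=1.65))   # Gy, D_w,m (SCT)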

  7. Absorbed dose water calorimeter

    SciTech Connect

    Domen, S.R.

    1982-01-26

    An absorbed dose water calorimeter is described that takes advantage of the low thermal diffusivity of water and the water-imperviousness of polyethylene film. An ultra-small bead thermistor is sandwiched between two thin polyethylene films stretched between insulative supports in a water bath. The polyethylene films insulate the thermistor and its leads, the leads being run out from between the films in insulated sleeving and then to junctions to form a Wheatstone bridge circuit. Convection barriers may be provided to reduce the effects of convection at the point of measurement. Controlled heating of different levels in the water bath is accomplished by electrical heater circuits provided for controlling temperature drift and providing adiabatic operation of the calorimeter. The absorbed dose is determined from the known specific heat of water and the measured temperature change.
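
    The measurement principle explains the design constraints: with water's specific heat of about 4186 J kg⁻¹ K⁻¹, one gray (1 J/kg) raises the temperature by only ~0.24 mK, hence the ultra-small thermistor, bridge readout and drift control. A two-line check:

        c_water = 4186.0                      # J / (kg K), specific heat of water
        print(1.0 / c_water * 1e3, "mK/Gy")   # ~0.239 mK temperature rise per gray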

  8. Cyclic cooling algorithm

    SciTech Connect

    Rempp, Florian; Mahler, Guenter; Michel, Mathias

    2007-09-15

    We introduce a scheme to perform the cooling algorithm, first presented by Boykin et al. in 2002, an arbitrary number of times on the same set of qubits. We achieve this goal by adding an additional SWAP gate and a bath contact to the algorithm. In this way, one qubit may be cooled repeatedly without adding additional qubits to the system. By using a product Liouville space to model the bath contact, we calculate the density matrix of the system after a given number of applications of the algorithm.

  9. Firefly algorithm with chaos

    NASA Astrophysics Data System (ADS)

    Gandomi, A. H.; Yang, X.-S.; Talatahari, S.; Alavi, A. H.

    2013-01-01

    A recently developed metaheuristic optimization algorithm, firefly algorithm (FA), mimics the social behavior of fireflies based on the flashing and attraction characteristics of fireflies. In the present study, we will introduce chaos into FA so as to increase its global search mobility for robust global optimization. Detailed studies are carried out on benchmark problems with different chaotic maps. Here, 12 different chaotic maps are utilized to tune the attractive movement of the fireflies in the algorithm. The results show that some chaotic FAs can clearly outperform the standard FA.
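
    One of the paper's variants, reduced to a sketch: drive the attractiveness coefficient with a logistic map so the attractive movement varies chaotically between iterations. Population size, coefficients and the benchmark are invented:

        import numpy as np

        rng = np.random.default_rng(2)
        pop = rng.random((20, 2)) * 10 - 5              # fireflies in [-5, 5]^2
        objective = lambda p: np.sum(p ** 2, axis=-1)   # sphere benchmark
        beta0, gamma, alpha = 0.7, 1.0, 0.2

        for _ in range(100):
            beta0 = 4.0 * beta0 * (1.0 - beta0)         # logistic map, chaotic regime
            light = objective(pop)
            for i in range(len(pop)):
                for j in range(len(pop)):
                    if light[j] < light[i]:             # j is brighter, so i moves toward j
                        r2 = np.sum((pop[i] - pop[j]) ** 2)
                        beta = beta0 * np.exp(-gamma * r2)
                        pop[i] += beta * (pop[j] - pop[i]) + alpha * (rng.random(2) - 0.5)
        print("best value:", objective(pop).min())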

  10. Network-Control Algorithm

    NASA Technical Reports Server (NTRS)

    Chan, Hak-Wai; Yan, Tsun-Yee

    1989-01-01

    Algorithm developed for optimal routing of packets of data along links of multilink, multinode digital communication network. Algorithm iterative and converges to cost-optimal assignment independent of initial assignment. Each node connected to other nodes through links, each containing number of two-way channels. Algorithm assigns channels according to message traffic leaving and arriving at each node. Modified to take account of different priorities among packets belonging to different users by using different delay constraints or imposing additional penalties via cost function.

  11. New stereo matching algorithm

    NASA Astrophysics Data System (ADS)

    Ahmed, Yasser A.; Afifi, Hossam; Rubino, Gerardo

    1999-05-01

    This paper presents a new algorithm for stereo matching. The main idea is to decompose the original problem into independent, hierarchical and more elementary problems that can be solved faster without any complicated mathematics, using BBD. To achieve that, we use a new image feature called the 'continuity feature' instead of classical noise. This feature can be extracted from any kind of image by a simple process and without using a searching technique. A new matching technique is proposed to match the continuity feature. The new algorithm resolves the main disadvantages of feature-based stereo matching algorithms.

  12. Radiation dose reduction for coronary artery calcium scoring at 320-detector CT with adaptive iterative dose reduction 3D.

    PubMed

    Tatsugami, Fuminari; Higaki, Toru; Fukumoto, Wataru; Kaichi, Yoko; Fujioka, Chikako; Kiguchi, Masao; Yamamoto, Hideya; Kihara, Yasuki; Awai, Kazuo

    2015-06-01

    To assess the possibility of reducing the radiation dose for coronary artery calcium (CAC) scoring by using adaptive iterative dose reduction 3D (AIDR 3D) on a 320-detector CT scanner. Fifty-four patients underwent routine- and low-dose CT for CAC scoring. Low-dose CT was performed at one-third of the tube current used for routine-dose CT. Routine-dose CT was reconstructed with filtered back projection (FBP) and low-dose CT was reconstructed with AIDR 3D. We compared the calculated Agatston-, volume-, and mass scores of these images. The overall percentage difference in the Agatston-, volume-, and mass scores between routine- and low-dose CT studies was 15.9, 11.6, and 12.6%, respectively. There were no significant differences in the routine- and low-dose CT studies irrespective of the scoring algorithms applied. The CAC measurements of both imaging modalities were highly correlated with respect to the Agatston- (r = 0.996), volume- (r = 0.996), and mass score (r = 0.997; p < 0.001, all); the Bland-Altman limits of agreement scores were -37.4 to 51.4, -31.2 to 36.4 and -30.3 to 40.9%, respectively, suggesting that AIDR 3D was a good alternative for FBP. The mean effective radiation dose for routine- and low-dose CT was 2.2 and 0.7 mSv, respectively. The use of AIDR 3D made it possible to reduce the radiation dose by 67% for CAC scoring without impairing the quantification of coronary calcification.

  13. Radiation dose reduction for coronary artery calcium scoring at 320-detector CT with adaptive iterative dose reduction 3D.

    PubMed

    Tatsugami, Fuminari; Higaki, Toru; Fukumoto, Wataru; Kaichi, Yoko; Fujioka, Chikako; Kiguchi, Masao; Yamamoto, Hideya; Kihara, Yasuki; Awai, Kazuo

    2015-06-01

    To assess the possibility of reducing the radiation dose for coronary artery calcium (CAC) scoring by using adaptive iterative dose reduction 3D (AIDR 3D) on a 320-detector CT scanner. Fifty-four patients underwent routine- and low-dose CT for CAC scoring. Low-dose CT was performed at one-third of the tube current used for routine-dose CT. Routine-dose CT was reconstructed with filtered back projection (FBP) and low-dose CT was reconstructed with AIDR 3D. We compared the calculated Agatston-, volume-, and mass scores of these images. The overall percentage difference in the Agatston-, volume-, and mass scores between routine- and low-dose CT studies was 15.9, 11.6, and 12.6%, respectively. There were no significant differences in the routine- and low-dose CT studies irrespective of the scoring algorithms applied. The CAC measurements of both imaging modalities were highly correlated with respect to the Agatston- (r = 0.996), volume- (r = 0.996), and mass score (r = 0.997; p < 0.001, all); the Bland-Altman limits of agreement scores were -37.4 to 51.4, -31.2 to 36.4 and -30.3 to 40.9%, respectively, suggesting that AIDR 3D was a good alternative for FBP. The mean effective radiation dose for routine- and low-dose CT was 2.2 and 0.7 mSv, respectively. The use of AIDR 3D made it possible to reduce the radiation dose by 67% for CAC scoring without impairing the quantification of coronary calcification. PMID:25754302

  14. Estimation of the Dose and Dose Rate Effectiveness Factor

    NASA Technical Reports Server (NTRS)

    Chappell, L.; Cucinotta, F. A.

    2013-01-01

    Current models to estimate radiation risk use the Life Span Study (LSS) cohort, which received high doses and high dose rates of radiation. Transferring risks from these high dose rates to the low doses and dose rates received by astronauts in space is a source of uncertainty in our risk calculations. The solid cancer models recommended by BEIR VII [1], UNSCEAR [2], and Preston et al [3] are fitted adequately by a linear dose response model, which implies that low doses and dose rates would be estimated the same as high doses and dose rates. However, animal and cell experiments imply there should be curvature in the dose response curve for tumor induction, and animal experiments that directly compare chronic with acute exposures show lower increases in tumor induction for chronic exposures. A dose and dose rate effectiveness factor (DDREF) has therefore been estimated and applied to transfer risks from the high doses and dose rates of the LSS cohort to low doses and dose rates such as those from missions in space. The BEIR VII committee [1] combined DDREF estimates using the LSS cohort and animal experiments using Bayesian methods to arrive at their recommended DDREF value of 1.5, with uncertainty. We reexamined the animal data considered by BEIR VII and included more animal data and human chromosome aberration data to improve the estimate for DDREF. Several experiments chosen by BEIR VII were deemed inappropriate for application to human risk models of solid cancer risk. Animal tumor experiments performed by Ullrich et al [4], Alpen et al [5], and Grahn et al [6] were analyzed to estimate the DDREF. Human chromosome aberration experiments performed on a sample of astronauts within NASA were also available to estimate the DDREF. The LSS cohort results reported by BEIR VII were combined with the new radiobiology results using Bayesian methods.
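
    For a linear-quadratic dose response aD + bD², the DDREF at an acute dose D reduces to 1 + (b/a)D; the coefficients below are placeholders chosen only to land near the BEIR VII value of 1.5:

        a, b = 0.5, 0.25   # hypothetical linear (Gy^-1) and quadratic (Gy^-2) terms
        D = 1.0            # acute reference dose (Gy)
        print(1 + (b / a) * D)   # DDREF = 1.5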

  15. A novel method for 4D measurement-guided planned dose perturbation to estimate patient dose/DVH changes due to interplay

    NASA Astrophysics Data System (ADS)

    Nelms, B.; Feygelman, V.

    2013-06-01

    As IMRT/VMAT technology continues to evolve, so do the dosimetric QA methods. We present the theoretical framework for the novel planned dose perturbation algorithm. It allows not only reconstruction of the 3D volumetric dose in a patient from a measurement in a cylindrical phantom, but also incorporation of the effects of the interplay between intrafractional organ motion and dynamic delivery. Unlike in our previous work, this 4D dose reconstruction does not require knowledge of the TPS dose for each control point of the plan, making the method much more practical. Motion is viewed as just another source of error, accounted for by perturbing (morphing) the planned dose distribution based on the limited empirical dose from the phantom measurement. The strategy for empirical verification of the algorithm is presented as the necessary next step.

  16. Benchmarking analytical calculations of proton doses in heterogeneous matter

    SciTech Connect

    Ciangaru, George; Polf, Jerimy C.; Bues, Martin; Smith, Alfred R.

    2005-12-15

    A proton dose computational algorithm, performing an analytical superposition of infinitely narrow proton beamlets (ASPB) is introduced. The algorithm uses the standard pencil beam technique of laterally distributing the central axis broad beam doses according to the Moliere scattering theory extended to slablike varying density media. The purpose of this study was to determine the accuracy of our computational tool by comparing it with experimental and Monte Carlo (MC) simulation data as benchmarks. In the tests, parallel wide beams of protons were scattered in water phantoms containing embedded air and bone materials with simple geometrical forms and spatial dimensions of a few centimeters. For homogeneous water and bone phantoms, the proton doses we calculated with the ASPB algorithm were found very comparable to experimental and MC data. For layered bone slab inhomogeneity in water, the comparison between our analytical calculation and the MC simulation showed reasonable agreement, even when the inhomogeneity was placed at the Bragg peak depth. There also was reasonable agreement for the parallelepiped bone block inhomogeneity placed at various depths, except for cases in which the bone was located in the region of the Bragg peak, when discrepancies were as large as more than 10%. When the inhomogeneity was in the form of abutting air-bone slabs, discrepancies of as much as 8% occurred in the lateral dose profiles on the air cavity side of the phantom. Additionally, the analytical depth-dose calculations disagreed with the MC calculations within 3% of the Bragg peak dose, at the entry and midway depths in the phantom. The distal depth-dose 20%-80% fall-off widths and ranges calculated with our algorithm and the MC simulation were generally within 0.1 cm of agreement. The analytical lateral-dose profile calculations showed smaller (by less than 0.1 cm) 20%-80% penumbra widths and shorter fall-off tails than did those calculated by the MC simulations. Overall

  17. Benchmarking analytical calculations of proton doses in heterogeneous matter.

    PubMed

    Ciangaru, George; Polf, Jerimy C; Bues, Martin; Smith, Alfred R

    2005-12-01

    A proton dose computational algorithm, performing an analytical superposition of infinitely narrow proton beamlets (ASPB) is introduced. The algorithm uses the standard pencil beam technique of laterally distributing the central axis broad beam doses according to the Moliere scattering theory extended to slablike varying density media. The purpose of this study was to determine the accuracy of our computational tool by comparing it with experimental and Monte Carlo (MC) simulation data as benchmarks. In the tests, parallel wide beams of protons were scattered in water phantoms containing embedded air and bone materials with simple geometrical forms and spatial dimensions of a few centimeters. For homogeneous water and bone phantoms, the proton doses we calculated with the ASPB algorithm were found very comparable to experimental and MC data. For layered bone slab inhomogeneity in water, the comparison between our analytical calculation and the MC simulation showed reasonable agreement, even when the inhomogeneity was placed at the Bragg peak depth. There also was reasonable agreement for the parallelepiped bone block inhomogeneity placed at various depths, except for cases in which the bone was located in the region of the Bragg peak, when discrepancies were as large as more than 10%. When the inhomogeneity was in the form of abutting air-bone slabs, discrepancies of as much as 8% occurred in the lateral dose profiles on the air cavity side of the phantom. Additionally, the analytical depth-dose calculations disagreed with the MC calculations within 3% of the Bragg peak dose, at the entry and midway depths in the phantom. The distal depth-dose 20%-80% fall-off widths and ranges calculated with our algorithm and the MC simulation were generally within 0.1 cm of agreement. The analytical lateral-dose profile calculations showed smaller (by less than 0.1 cm) 20%-80% penumbra widths and shorter fall-off tails than did those calculated by the MC simulations. Overall

  18. Radiation dose to physicians’ eye lens during interventional radiology

    NASA Astrophysics Data System (ADS)

    Bahruddin, N. A.; Hashim, S.; Karim, M. K. A.; Sabarudin, A.; Ang, W. C.; Salehhon, N.; Bakar, K. A.

    2016-03-01

    The demand for interventional radiology has increased, leading to significant radiation risk, and eye lens dose assessment has become a major concern. In this study, we investigated physicians' eye lens doses during interventional procedures. Measurements were made using TLD-100 (LiF:Mg,Ti) dosimeters and recorded as the personal dose equivalent at a depth of 0.07 mm, Hp(0.07). Annual Hp(0.07) and annual effective dose were estimated using an estimated annual workload and the Von Boetticher algorithm. Our results showed mean Hp(0.07) doses of 0.33 mSv and 0.20 mSv for the left and right eye lens, respectively. The highest estimated annual eye lens dose was 29.33 mSv per year, recorded on the left eye lens during a fistulogram procedure. Five physicians exceeded the 20 mSv dose limit recommended by the International Commission on Radiological Protection (ICRP). It is suggested that frequent training and education on occupational radiation exposure are necessary to increase physicians' knowledge and awareness, thus reducing dose during interventional procedures.
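
    The annual figure is essentially the per-procedure reading scaled by workload; a back-of-envelope version with an invented workload (the study itself used the Von Boetticher algorithm):

        hp007_per_procedure = 0.33   # mSv, mean left-eye Hp(0.07) from the study
        procedures_per_year = 80     # hypothetical annual workload
        annual = hp007_per_procedure * procedures_per_year
        print(annual, "mSv/year,", "over" if annual > 20 else "within", "the ICRP limit")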

  19. Hanford Site Annual Report Radiological Dose Calculation Upgrade Evaluation

    SciTech Connect

    Snyder, Sandra F.

    2010-02-28

    Operations at the Hanford Site, Richland, Washington, result in the release of radioactive materials that can expose offsite residents. Site authorities are required to estimate the dose to the maximally exposed offsite resident. Due to the very low levels of exposure at the residence, computer models, rather than environmental samples, are used to estimate exposure, intake, and dose. A DOS-based model has been used in the past (GENII version 1.485). GENII v1.485 has been updated to a Windows®-based software (GENII version 2.08). Use of the updated software will facilitate future dose evaluations, but must be demonstrated to provide results comparable to those of GENII v1.485. This report describes the GENII v1.485 and GENII v2.08 exposure, intake, and dose estimates for the maximally exposed offsite resident reported for calendar year 2008. The GENII v2.08 results reflect updates to the implemented algorithms. No two environmental models produce the same results, as was again demonstrated in this report. The aggregated dose results from the 2008 Hanford Site airborne and surface water exposure scenarios are comparable. Therefore, the GENII v2.08 software is recommended for future offsite resident dose evaluations.

  20. Cloud model bat algorithm.

    PubMed

    Zhou, Yongquan; Xie, Jian; Li, Liangliang; Ma, Mingzhi

    2014-01-01

    Bat algorithm (BA) is a novel stochastic global optimization algorithm. The cloud model is an effective tool for transforming between qualitative concepts and their quantitative representation. Based on the bat echolocation mechanism and the cloud model's strength in representing uncertain knowledge, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling the echolocation model based on the living and preying characteristics of bats, utilizing the transformation theory of the cloud model to depict the qualitative concept: "bats approach their prey." Furthermore, a Lévy flight mode and a population information communication mechanism are introduced to balance exploration and exploitation. The simulation results show that the cloud model bat algorithm performs well on function optimization. PMID:24967425

  1. Automated coronary artery calcification detection on low-dose chest CT images

    NASA Astrophysics Data System (ADS)

    Xie, Yiting; Cham, Matthew D.; Henschke, Claudia; Yankelevitz, David; Reeves, Anthony P.

    2014-03-01

    Coronary artery calcification (CAC) measurement from low-dose CT images can be used to assess the risk of coronary artery disease. A fully automatic algorithm to detect and measure CAC from low-dose non-contrast, non-ECG-gated chest CT scans is presented. Based on the automatically detected CAC, the Agatston score (AS), mass score and volume score were computed. These were compared with scores obtained manually from standard-dose ECG-gated scans and low-dose un-gated scans of the same patient. The automatic algorithm segments the heart region based on other pre-segmented organs to provide a coronary region mask. Mitral valve and aortic valve calcification is identified and excluded. All remaining voxels greater than 180 HU within the mask region are considered CAC candidates. The heart segmentation algorithm was evaluated on 400 non-contrast cases with both low-dose and regular-dose CT scans. By visual inspection, 371 (92.8%) of the segmentations were acceptable. The automated CAC detection algorithm was evaluated on 41 low-dose non-contrast CT scans. Manual markings were performed on both low-dose and standard-dose scans for these cases. Using linear regression, the correlation of the automatic AS with the standard-dose manual scores was 0.86; with the low-dose manual scores the correlation was 0.91. Standard risk categories were also computed. The automated risk category agreed with the manual markings of gated scans in 24 cases, while 15 cases were one category off. For low-dose scans, the automatic method agreed in 33 cases, while 7 cases were one category off.
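
    A sketch of Agatston scoring on one thresholded slice (synthetic pixels; the standard 130 HU scoring cut is shown, while the 180 HU figure above is the paper's candidate cut for low-dose un-gated scans):

        import numpy as np
        from scipy import ndimage

        def hu_weight(peak_hu):
            # Standard Agatston density weights by lesion peak HU.
            if peak_hu >= 400: return 4
            if peak_hu >= 300: return 3
            if peak_hu >= 200: return 2
            if peak_hu >= 130: return 1
            return 0

        def agatston_slice(hu, coronary_mask, pixel_area_mm2):
            lesions, n = ndimage.label((hu >= 130) & coronary_mask)
            return sum((lesions == lab).sum() * pixel_area_mm2 *
                       hu_weight(hu[lesions == lab].max()) for lab in range(1, n + 1))

        hu = np.random.normal(40, 20, (64, 64))      # synthetic soft-tissue slice
        hu[20:24, 30:33] = 320                       # synthetic calcified plaque
        mask = np.ones_like(hu, dtype=bool)          # stand-in coronary region mask
        print(agatston_slice(hu, mask, pixel_area_mm2=0.25))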

  2. Quality assurance for radiotherapy in prostate cancer: Point dose measurements in intensity modulated fields with large dose gradients

    SciTech Connect

    Escude, Lluis . E-mail: lluis.escude@gmx.net; Linero, Dolors; Molla, Meritxell; Miralbell, Raymond

    2006-11-15

    Purpose: We aimed to evaluate an optimization algorithm designed to find the most favorable points to position an ionization chamber (IC) for quality assurance dose measurements of patients treated for prostate cancer with intensity-modulated radiotherapy (IMRT) and fields up to 10 cm x 10 cm. Methods and Materials: Three cylindrical ICs (PTW, Freiburg, Germany) were used with volumes of 0.6 cc, 0.125 cc, and 0.015 cc. Dose measurements were made in a plastic phantom (PMMA) at 287 optimized points. An algorithm was designed to search for points with the lowest dose gradient. Measurements were made also at 39 nonoptimized points. Results were normalized to a reference homogeneous field introducing a dose ratio factor, which allowed us to compare measured vs. calculated values as percentile dose ratio factor deviations ΔF (%). A tolerance range of ΔF (%) of ±3% was considered. Results: Half of the ΔF (%) values obtained at nonoptimized points were outside the acceptable range. Values at optimized points were widely spread for the largest IC (i.e., 60% of the results outside the tolerance range), whereas for the two small-volume ICs, only 14.6% of the results were outside the tolerance interval. No differences were observed when comparing the two small ICs. Conclusions: The presented optimization algorithm is a useful tool to determine the best IC in-field position for optimal dose measurement conditions. A good agreement between calculated and measured doses can be obtained by positioning small volume chambers at carefully selected points in the field. Large chambers may be unreliable even at optimized points for IMRT fields ≤10 cm x 10 cm.
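
    The point-selection idea maps directly onto a gradient search; a sketch on a stand-in dose grid (the published algorithm works on the planned IMRT dose and also respects chamber geometry):

        import numpy as np

        dose = np.random.rand(50, 50, 30)            # stand-in 3D dose grid (Gy)
        gx, gy, gz = np.gradient(dose, 2.0)          # 2 mm grid spacing
        grad_mag = np.sqrt(gx**2 + gy**2 + gz**2)    # local dose-gradient magnitude
        flattest = np.argsort(grad_mag, axis=None)[:10]
        candidates = list(zip(*np.unravel_index(flattest, dose.shape)))
        print(candidates[0])                         # flattest voxel for the chamber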

  3. OpenEIS Algorithms

    2013-07-29

    The OpenEIS Algorithm package seeks to provide a low-risk path for building owners, service providers and managers to explore analytical methods for improving building control and operational efficiency. Users of this software can analyze building data, and learn how commercial implementations would provide long-term value. The code also serves as a reference implementation for developers who wish to adapt the algorithms for use in commercial tools or service offerings.

  4. The Superior Lambert Algorithm

    NASA Astrophysics Data System (ADS)

    der, G.

    2011-09-01

    Lambert algorithms are used extensively for initial orbit determination, mission planning, space debris correlation, and missile targeting, just to name a few applications. Due to the significance of the Lambert problem in Astrodynamics, Gauss, Battin, Godal, Lancaster, Gooding, Sun and many others (References 1 to 15) have provided numerous formulations leading to various analytic solutions and iterative methods. Most Lambert algorithms and their computer programs can only work within one revolution, break down or converge slowly when the transfer angle is near zero or 180 degrees, and their multi-revolution limitations are either ignored or barely addressed. Despite claims of robustness, many Lambert algorithms fail without notice, and the users seldom have a clue why. The DerAstrodynamics lambert2 algorithm, which is based on the analytic solution formulated by Sun, works for any number of revolutions and converges rapidly at any transfer angle. It provides significant capability enhancements over every other Lambert algorithm in use today. These include improved speed, accuracy, robustness, and multirevolution capabilities as well as implementation simplicity. Additionally, the lambert2 algorithm provides a powerful tool for solving the angles-only problem without artificial singularities (pointed out by Gooding in Reference 16), which involves 3 lines of sight captured by optical sensors, or systems such as the Air Force Space Surveillance System (AFSSS). The analytic solution is derived from the extended Godal’s time equation by Sun, while the iterative method of solution is that of Laguerre, modified for robustness. The Keplerian solution of a Lambert algorithm can be extended to include the non-Keplerian terms of the Vinti algorithm via a simple targeting technique (References 17 to 19). Accurate analytic non-Keplerian trajectories can be predicted for satellites and ballistic missiles, while performing at least 100 times faster in speed than most

  5. Recommendations for dose calculations of lung cancer treatment plans treated with stereotactic ablative body radiotherapy (SABR)

    NASA Astrophysics Data System (ADS)

    Devpura, S.; Siddiqui, M. S.; Chen, D.; Liu, D.; Li, H.; Kumar, S.; Gordon, J.; Ajlouni, M.; Movsas, B.; Chetty, I. J.

    2014-03-01

    The purpose of this study was to systematically evaluate dose distributions computed with 5 different dose algorithms for patients with lung cancers treated using stereotactic ablative body radiotherapy (SABR). Treatment plans for 133 lung cancer patients, initially computed with a 1D-pencil beam (equivalent-path-length, EPL-1D) algorithm, were recalculated with 4 other algorithms commissioned for treatment planning, including 3-D pencil-beam (EPL-3D), anisotropic analytical algorithm (AAA), collapsed cone convolution superposition (CCC), and Monte Carlo (MC). The plan prescription dose was 48 Gy in 4 fractions normalized to the 95% isodose line. Tumors were classified according to location: peripheral tumors surrounded by lung (lung-island, N=39), peripheral tumors attached to the rib-cage or chest wall (lung-wall, N=44), and centrally-located tumors (lung-central, N=50). Relative to the EPL-1D algorithm, PTV D95 and mean dose values computed with the other 4 algorithms were lowest for "lung-island" tumors with smallest field sizes (3-5 cm). On the other hand, the smallest differences were noted for lung-central tumors treated with largest field widths (7-10 cm). Amongst all locations, dose distribution differences were most strongly correlated with tumor size for lung-island tumors. For most cases, convolution/superposition and MC algorithms were in good agreement. Mean lung dose (MLD) values computed with the EPL-1D algorithm were highly correlated with those of the other algorithms (correlation coefficient = 0.99). The MLD values were found to be ~10% lower for small lung-island tumors with the model-based (conv/superposition and MC) vs. the correction-based (pencil-beam) algorithms, with the model-based algorithms predicting greater low dose spread within the lungs. This study suggests that pencil beam algorithms should be avoided for lung SABR planning. For the most challenging cases, small tumors surrounded entirely by lung tissue (lung-island type), a Monte

  6. VMATc: VMAT with constant gantry speed and dose rate

    NASA Astrophysics Data System (ADS)

    Peng, Fei; Jiang, Steve B.; Romeijn, H. Edwin; Epelman, Marina A.

    2015-04-01

    This article considers the treatment plan optimization problem for Volumetric Modulated Arc Therapy (VMAT) with constant gantry speed and dose rate (VMATc). In particular, we consider the simultaneous optimization of multi-leaf collimator leaf positions and a constant gantry speed and dose rate. We propose a heuristic framework for (approximately) solving this optimization problem that is based on hierarchical decomposition. Specifically, an iterative algorithm is used to heuristically optimize dose rate and gantry speed selection, where at every iteration a leaf position optimization subproblem is solved, also heuristically, to find a high-quality plan corresponding to a given dose rate and gantry speed. We apply our framework to clinical patient cases, and compare the resulting VMATc plans to idealized IMRT, as well as full VMAT plans. Our results suggest that VMATc is capable of producing treatment plans of comparable quality to VMAT, albeit at the expense of long computation time and generally higher total monitor units.
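
    The decomposition can be summarized as two nested loops; everything below is a schematic with stub functions and invented candidate grids, not the authors' heuristics:

        def optimize_leaves(dose_rate, gantry_speed):
            # Stand-in for the leaf-position subproblem; returns (plan, objective).
            return None, (dose_rate - 400) ** 2 + 100 * (gantry_speed - 4.0) ** 2

        best = None
        for dose_rate in (200, 300, 400, 500):       # MU/min, constant candidates
            for gantry_speed in (2.0, 4.0, 6.0):     # deg/s, constant candidates
                plan, cost = optimize_leaves(dose_rate, gantry_speed)
                if best is None or cost < best[0]:
                    best = (cost, dose_rate, gantry_speed, plan)
        print("selected constants:", best[1], "MU/min,", best[2], "deg/s")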

  7. Race influences warfarin dose changes associated with genetic factors.

    PubMed

    Limdi, Nita A; Brown, Todd M; Yan, Qi; Thigpen, Jonathan L; Shendre, Aditi; Liu, Nianjun; Hill, Charles E; Arnett, Donna K; Beasley, T Mark

    2015-07-23

    Warfarin dosing algorithms adjust for race, assigning a fixed effect size to each predictor, thereby attenuating the differential effect by race. Attenuation likely occurs in both race groups but may be more pronounced in the less-represented race group. Therefore, we evaluated whether the effect of clinical (age, body surface area [BSA], chronic kidney disease [CKD], and amiodarone use) and genetic factors (CYP2C9*2, *3, *5, *6, *11, rs12777823, VKORC1, and CYP4F2) on warfarin dose differs by race using regression analyses among 1357 patients enrolled in a prospective cohort study and compared predictive ability of race-combined vs race-stratified models. Differential effect of predictors by race was assessed using predictor-race interactions in race-combined analyses. Warfarin dose was influenced by age, BSA, CKD, amiodarone use, and CYP2C9*3 and VKORC1 variants in both races, by CYP2C9*2 and CYP4F2 variants in European Americans, and by rs12777823 in African Americans. CYP2C9*2 was associated with a lower dose only among European Americans (20.6% vs 3.0%, P < .001) and rs12777823 only among African Americans (12.3% vs 2.3%, P = .006). Although VKORC1 was associated with dose decrease in both races, the proportional decrease was higher among European Americans (28.9% vs 19.9%, P = .003) compared with African Americans. Race-stratified analysis improved dose prediction in both race groups compared with race-combined analysis. We demonstrate that the effect of predictors on warfarin dose differs by race, which may explain divergent findings reported by recent warfarin pharmacogenetic trials. We recommend that warfarin dosing algorithms should be stratified by race rather than adjusted for race.
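
    The race-combined versus race-stratified comparison can be illustrated with ordinary least squares on synthetic data; the predictor-by-race interaction term plays the role described in the abstract. All coefficients and data below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
race = rng.integers(0, 2, n)        # 0 / 1 group indicator (synthetic)
vkorc1 = rng.integers(0, 3, n)      # variant allele count (synthetic)
age = rng.normal(60, 10, n)

# Synthetic log-dose: the VKORC1 effect differs by race, as reported.
log_dose = (3.5 - 0.02 * (age - 60)
            - (0.30 - 0.10 * race) * vkorc1
            + rng.normal(0, 0.1, n))

def fit(X, y):
    """OLS via least squares; returns [intercept, coefficients...]."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

# Race-combined model with a VKORC1-by-race interaction term.
Xc = np.column_stack([age, vkorc1, race, vkorc1 * race])
print("interaction coefficient:", round(fit(Xc, log_dose)[4], 3))

# Race-stratified models: the VKORC1 effect estimated within each group.
for g in (0, 1):
    m = race == g
    Xs = np.column_stack([age[m], vkorc1[m]])
    print(f"group {g} VKORC1 coefficient:", round(fit(Xs, log_dose[m])[2], 3))
```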

  8. Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    Finch, S.M.; McMakin, A.H.

    1991-01-01

    The objective of the Hanford Environmental Dose Reconstruction Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The project is divided into the following technical tasks. These tasks correspond to the path radionuclides followed, from release to impact on humans (dose estimates): source terms; environmental transport; environmental monitoring data; demographics, agriculture, and food habits; environmental pathways and dose estimates.

  9. Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    Finch, S.M.; McMakin, A.H.

    1992-02-01

    The objective of the Hanford Environmental Dose Reconstruction Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The project is divided into the following technical tasks. These tasks correspond to the path radionuclides followed, from release to impact on humans (dose estimates): source terms; environmental transport; environmental monitoring data; demography, food consumption, and agriculture; environmental pathways and dose estimates.

  10. Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    Cannon, S.D.; Finch, S.M.

    1992-10-01

    The objective of the Hanford Environmental Dose Reconstruction (HEDR) Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The independent Technical Steering Panel (TSP) provides technical direction. The project is divided into the following technical tasks. These tasks correspond to the path radionuclides followed from release to impact on humans (dose estimates): Source Terms; Environmental Transport; Environmental Monitoring Data; Demography, Food Consumption, and Agriculture; and Environmental Pathways and Dose Estimates.

  11. Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    Finch, S.M.; McMakin, A.H.

    1992-01-01

    The objective of the Hanford Environmental Dose Reconstruction Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The project is divided into the following technical tasks. These tasks correspond to the path radionuclides followed, from release to impact on humans (dose estimates): Source Terms; Environmental Transport; Environmental Monitoring Data; Demography, Food Consumption, and Agriculture; and Environmental Pathways and Dose Estimates.

  12. Proton dose distribution measurements using a MOSFET detector with a simple dose-weighted correction method for LET effects.

    PubMed

    Kohno, Ryosuke; Hotta, Kenji; Matsuura, Taeko; Matsubara, Kana; Nishioka, Shie; Nishio, Teiji; Kawashima, Mitsuhiko; Ogino, Takashi

    2011-04-04

    We experimentally evaluated the proton beam dose reproducibility, sensitivity, angular dependence and depth-dose relationships for a new Metal Oxide Semiconductor Field Effect Transistor (MOSFET) detector. The detector was fabricated with a thinner oxide layer and was operated at high-bias voltages. In order to accurately measure dose distributions, we developed a practical method for correcting the MOSFET response to proton beams. The detector was tested by examining lateral dose profiles formed by protons passing through an L-shaped bolus. The dose reproducibility, angular dependence and depth-dose response were evaluated using a 190 MeV proton beam. Depth-output curves produced using the MOSFET detectors were compared with results obtained using an ionization chamber (IC). Since accurate measurements of proton dose distribution require correction for LET effects, we developed a simple dose-weighted correction method. The correction factors were determined as a function of proton penetration depth, or residual range. The residual proton range at each measurement point was calculated using the pencil beam algorithm. Lateral measurements in a phantom were obtained for pristine and SOBP beams. The reproducibility of the MOSFET detector was within 2%, and the angular dependence was less than 9%. The detector exhibited a good response at the Bragg peak (0.74 relative to the IC detector). For dose distributions resulting from protons passing through an L-shaped bolus, the corrected MOSFET dose agreed well with the IC results. Absolute proton dosimetry can be performed using MOSFET detectors to a precision of about 3% (1 sigma). A thinner oxide layer improved the LET response in proton dosimetry. By employing correction methods for LET dependence, it is possible to measure absolute proton dose using MOSFET detectors.
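
    A sketch of the dose-weighted correction idea: a calibration curve of correction factor versus residual range (all values below are hypothetical) is interpolated at each measurement point's residual range, which the authors computed with a pencil beam algorithm:

```python
import numpy as np

# Hypothetical calibration: correction factor vs. residual proton range.
# Small residual range (near the Bragg peak) needs the largest correction,
# consistent with the 0.74 relative response quoted in the abstract.
residual_range_cm = np.array([0.1, 0.5, 1.0, 2.0, 5.0, 10.0])
correction_factor = np.array([1.35, 1.20, 1.12, 1.06, 1.02, 1.00])

def corrected_dose(raw_mosfet_dose, residual_range):
    """Interpolate the calibration curve at the measurement point's
    residual range (computed with a pencil beam algorithm in the paper)."""
    f = np.interp(residual_range, residual_range_cm, correction_factor)
    return raw_mosfet_dose * f

print(corrected_dose(1.80, 0.3))   # near the Bragg peak: large correction
print(corrected_dose(1.80, 8.0))   # plateau region: nearly none
```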

  13. Proton dose distribution measurements using a MOSFET detector with a simple dose-weighted correction method for LET effects.

    PubMed

    Kohno, Ryosuke; Hotta, Kenji; Matsuura, Taeko; Matsubara, Kana; Nishioka, Shie; Nishio, Teiji; Kawashima, Mitsuhiko; Ogino, Takashi

    2011-01-01

    We experimentally evaluated the proton beam dose reproducibility, sensitivity, angular dependence and depth-dose relationships for a new Metal Oxide Semiconductor Field Effect Transistor (MOSFET) detector. The detector was fabricated with a thinner oxide layer and was operated at high-bias voltages. In order to accurately measure dose distributions, we developed a practical method for correcting the MOSFET response to proton beams. The detector was tested by examining lateral dose profiles formed by protons passing through an L-shaped bolus. The dose reproducibility, angular dependence and depth-dose response were evaluated using a 190 MeV proton beam. Depth-output curves produced using the MOSFET detectors were compared with results obtained using an ionization chamber (IC). Since accurate measurements of proton dose distribution require correction for LET effects, we developed a simple dose-weighted correction method. The correction factors were determined as a function of proton penetration depth, or residual range. The residual proton range at each measurement point was calculated using the pencil beam algorithm. Lateral measurements in a phantom were obtained for pristine and SOBP beams. The reproducibility of the MOSFET detector was within 2%, and the angular dependence was less than 9%. The detector exhibited a good response at the Bragg peak (0.74 relative to the IC detector). For dose distributions resulting from protons passing through an L-shaped bolus, the corrected MOSFET dose agreed well with the IC results. Absolute proton dosimetry can be performed using MOSFET detectors to a precision of about 3% (1 sigma). A thinner oxide layer improved the LET response in proton dosimetry. By employing correction methods for LET dependence, it is possible to measure absolute proton dose using MOSFET detectors. PMID:21587191

  14. A pencil beam algorithm for helium ion beam therapy

    SciTech Connect

    Fuchs, Hermann; Stroebele, Julia; Schreiner, Thomas; Hirtl, Albert; Georg, Dietmar

    2012-11-15

    Purpose: To develop a flexible pencil beam algorithm for helium ion beam therapy. Dose distributions were calculated using the newly developed pencil beam algorithm and validated using Monte Carlo (MC) methods. Methods: The algorithm was based on the established theory of fluence-weighted elemental pencil beam (PB) kernels. Using a new real-time splitting approach, a minimization routine selects the optimal shape for each sub-beam. Dose depositions along the beam path were determined using a look-up table (LUT). Data for LUT generation were derived from MC simulations in water using GATE 6.1. For materials other than water, dose depositions were calculated by the algorithm using water-equivalent depth scaling. Lateral beam spreading caused by multiple scattering was accounted for by implementing a non-local scattering formula developed by Gottschalk. A new nuclear correction was modelled using a Voigt function and implemented by a LUT approach. Validation simulations were performed using a phantom filled with homogeneous materials or heterogeneous slabs of up to 3 cm. The beams were incident perpendicular to the phantom's surface with initial particle energies ranging from 50 to 250 MeV/A and a total number of 10^7 ions per beam. For comparison, a special evaluation software was developed to calculate the gamma indices for dose distributions. Results: In homogeneous phantoms, maximum range deviations between PB and MC of less than 1.1% and differences in the width of the distal energy falloff of the Bragg peak from 80% to 20% of less than 0.1 mm were found. Heterogeneous phantoms using layered slabs satisfied a gamma-index criterion of 2%/2 mm of the local value except for some single voxels. For more complex phantoms using laterally arranged bone-air slabs, the gamma-index criterion was exceeded in some areas, giving a maximum gamma-index of 1.75, and 4.9% of the voxels showed gamma-index values larger than one. The calculation precision of the
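
    The water-equivalent depth scaling step can be sketched as follows: accumulate geometric depth weighted by each voxel's stopping power relative to water, then look up the dose in the water LUT. The toy Bragg curve below stands in for the GATE-derived table and is purely illustrative:

```python
import numpy as np

# Hypothetical look-up table: dose vs. depth in water for one beam energy.
lut_depth_cm = np.linspace(0.0, 20.0, 201)
lut_dose = 0.2 + np.exp(-((lut_depth_cm - 15.0) ** 2) / 0.5)  # toy Bragg curve

def dose_along_ray(depths_cm, rel_stopping_power):
    """Water-equivalent depth scaling: accumulate geometric depth weighted
    by relative stopping power, then index the water LUT."""
    dz = np.diff(depths_cm, prepend=0.0)
    weq = np.cumsum(dz * rel_stopping_power)
    return np.interp(weq, lut_depth_cm, lut_dose)

depths = np.linspace(0.0, 20.0, 201)
material = np.where((depths > 5) & (depths < 8), 0.3, 1.0)  # lung-like slab
dose = dose_along_ray(depths, material)
print("peak shifts deeper in the low-density case:", depths[dose.argmax()])
```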

  15. Detection of low level gaseous releases and dose evaluation from continuous gamma dose measurements using a wavelet transformation technique.

    PubMed

    Paul, Sabyasachi; Rao, D D; Sarkar, P K

    2012-11-01

    Measurement of environmental dose in the vicinity of a nuclear power plant site (Tarapur, India) was carried out continuously for the years 2007-2010, and attempts have been made to quantify the additional contributions from nuclear power plants over natural background by segregating the background fluctuations from the events due to plume passage using a non-decimated wavelet approach. A conservative estimate obtained using wavelet-based analysis showed a maximum annual dose of 38 μSv at 1.6 km and 4.8 μSv at 10 km from the installation. The detected events within a year are in good agreement with the month-wise wind-rose profile, indicating the reliability of the algorithm for proper detection of an event from continuous dose rate measurements. The results were validated against dispersion model dose predictions using the source term from routine monitoring data and meteorological parameters.
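
    A sketch of the non-decimated wavelet idea using PyWavelets: the stationary wavelet transform separates a slowly varying background plus noise from event-like excursions, which can then be flagged against a robust baseline. The signal, wavelet choice, and thresholds here are illustrative assumptions, not the paper's:

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(1)
n = 1024
background = 100 + rng.normal(0, 2, n)     # synthetic dose rate, nGy/h
plume = np.zeros(n)
plume[400:440] = 15.0                      # synthetic plume-passage event
dose_rate = background + plume

# Non-decimated (stationary) wavelet transform; soft-threshold the detail
# coefficients to suppress background noise before event screening.
coeffs = pywt.swt(dose_rate, "db4", level=4)
sigma = np.median(np.abs(coeffs[-1][1])) / 0.6745   # robust noise estimate
den = [(a, pywt.threshold(d, 3 * sigma, mode="soft")) for a, d in coeffs]
smooth = pywt.iswt(den, "db4")

# Flag samples that rise above a robust background band.
base = np.median(smooth)
mad = np.median(np.abs(smooth - base))
event = smooth > base + 5 * mad
print(f"{event.sum()} event samples, excess ~{(smooth[event] - base).sum():.0f}")
```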

  16. A computer simulation method for low-dose CT images by use of real high-dose images: a phantom study.

    PubMed

    Takenaga, Tomomi; Katsuragawa, Shigehiko; Goto, Makoto; Hatemura, Masahiro; Uchiyama, Yoshikazu; Shiraishi, Junji

    2016-01-01

    Practical simulation of low-dose CT images could be a helpful means of optimizing the CT exposure dose. Because current methods reported by several researchers are limited to specific vendor platforms and generally rely on raw sinogram data that are difficult to access, we have developed a new computerized scheme for producing simulated low-dose CT images from real high-dose images without use of raw sinogram data or of a particular phantom. Our computerized scheme for low-dose CT simulation was based on the addition of a simulated noise image to a real high-dose CT image reconstructed by the filtered back-projection algorithm. First, a sinogram was generated from the forward projection of a high-dose CT image. Then, an additional noise sinogram resulting from use of a reduced exposure dose was estimated from a predetermined noise model. Finally, a noise CT image was reconstructed with a predetermined filter and was added to the real high-dose CT image to create a simulated low-dose CT image. The noise power spectrum and modulation transfer function of the simulated low-dose images were very close to those of the real low-dose images. In order to confirm the feasibility of our method, we applied it to clinical cases which were examined initially at high dose and then followed up with low-dose CT. In conclusion, our proposed method could simulate low-dose CT images from real high-dose images with sufficient accuracy and could be used for determining the optimal dose setting for various clinical CT examinations.
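
    The three steps of the scheme map directly onto standard tooling; the sketch below uses scikit-image's radon/iradon and a deliberately crude 1/dose noise model (the paper's noise model is measured, not assumed, and the scale factor here is invented):

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

high = shepp_logan_phantom()                          # stands in for a real
theta = np.linspace(0.0, 180.0, 400, endpoint=False)  # high-dose FBP image

# 1) Forward-project the high-dose image into a sinogram.
sino = radon(high, theta=theta)

# 2) Estimate the *additional* sinogram noise at reduced dose. Toy model:
#    quantum noise variance scales roughly as 1/dose, so a quarter dose
#    adds variance proportional to (1/0.25 - 1). The 0.5 scale factor is
#    an invented calibration constant.
rng = np.random.default_rng(0)
extra_sigma = 0.5 * np.sqrt(1.0 / 0.25 - 1.0)
noise_sino = rng.normal(0.0, extra_sigma, sino.shape)

# 3) FBP-reconstruct the noise sinogram alone and add it to the image.
noise_img = iradon(noise_sino, theta=theta, filter_name="ramp")
low = high + noise_img
print("simulated low-dose image, added noise std:", noise_img.std().round(4))
```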

  17. A topographic leaf-sequencing algorithm for delivering intensity modulated radiation therapy.

    PubMed

    Desai, Dharmin; Ramsey, Chester R; Breinig, Marianne; Mahan, Stephen L

    2006-08-01

    Topographic treatment is a radiation therapy delivery technique for fixed-gantry (nonrotational) treatments on a helical tomotherapy system. The intensity-modulated fields are created by moving the treatment couch relative to a fan-beam positioned at fixed gantry angles. The delivered dose distribution is controlled by moving multileaf collimator (MLC) leaves into and out of the fan beam. The purpose of this work was to develop a leaf-sequencing algorithm for creating topographic MLC sequences. Topographic delivery was modeled using the analogy of a water faucet moving over a collection of bottles. The flow rate per unit length of the water from the faucet represented the photon fluence per unit length along the width of the fan beam, the collection of bottles represented the pixels in the treatment planning fluence map, and the volume of water collected in each bottle represented the delivered fluence. The radiation fluence per unit length delivered to the target at a given position is given by the convolution of the intensity distribution per unit length over the width of the beam and the time per unit distance along the direction of travel that an MLC leaf is open. The MLC opening times for the desired dose profiles were determined using a technique based on deconvolution using a genetic algorithm. The MLC opening times were expanded in terms of a Fourier series, and a genetic algorithm was used to find the best expansion coefficients for a given dose distribution. A series of wedge shapes (15, 30, 45, and 60 deg) and "dose well" test fluence maps were created to test the algorithm's ability to generate topographic leaf sequences. The accuracy of the leaf-sequencing algorithm was measured on a helical tomotherapy system using radiographic film placed at depth in water equivalent material. The measured dose profiles were compared with the desired dose distributions. The agreement was within ±2% or 2 mm distance-to-agreement (DTA) in the high dose gradient regions.
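
    The forward model in the water-faucet analogy is a 1-D convolution: delivered fluence = (beam intensity per unit length) convolved with (leaf opening time per unit distance). A toy wedge example, with all numbers invented:

```python
import numpy as np

# Beam intensity per unit length across the fan-beam width (invented).
beam_profile = np.array([0.2, 0.6, 1.0, 0.6, 0.2])

# MLC leaf opening time per unit distance along couch travel: a wedge.
open_time = np.zeros(60)
open_time[10:50] = np.linspace(0.2, 1.0, 40)

# Delivered fluence = convolution of the two ("faucet over moving bottles").
delivered = np.convolve(open_time, beam_profile, mode="same")
print(delivered[8:16].round(3))
```

    The leaf-sequencing problem is the inverse: recover open_time given a desired delivered fluence, which the authors attack by expanding the opening times in a Fourier series and searching the coefficients with a genetic algorithm.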

  18. A topographic leaf-sequencing algorithm for delivering intensity modulated radiation therapy

    SciTech Connect

    Desai, Dharmin; Ramsey, Chester R.; Breinig, Marianne; Mahan, Stephen L.

    2006-08-15

    Topographic treatment is a radiation therapy delivery technique for fixed-gantry (nonrotational) treatments on a helical tomotherapy system. The intensity-modulated fields are created by moving the treatment couch relative to a fan-beam positioned at fixed gantry angles. The delivered dose distribution is controlled by moving multileaf collimator (MLC) leaves into and out of the fan beam. The purpose of this work was to develop a leaf-sequencing algorithm for creating topographic MLC sequences. Topographic delivery was modeled using the analogy of a water faucet moving over a collection of bottles. The flow rate per unit length of the water from the faucet represented the photon fluence per unit length along the width of the fan beam, the collection of bottles represented the pixels in the treatment planning fluence map, and the volume of water collected in each bottle represented the delivered fluence. The radiation fluence per unit length delivered to the target at a given position is given by the convolution of the intensity distribution per unit length over the width of the beam and the time per unit distance along the direction of travel that an MLC leaf is open. The MLC opening times for the desired dose profiles were determined using a technique based on deconvolution using a genetic algorithm. The MLC opening times were expanded in terms of a Fourier series, and a genetic algorithm was used to find the best expansion coefficients for a given dose distribution. A series of wedge shapes (15, 30, 45, and 60 deg) and 'dose well' test fluence maps were created to test the algorithm's ability to generate topographic leaf sequences. The accuracy of the leaf-sequencing algorithm was measured on a helical tomotherapy system using radiographic film placed at depth in water equivalent material. The measured dose profiles were compared with the desired dose distributions. The agreement was within ±2% or 2 mm distance-to-agreement (DTA) in the high dose gradient regions.

  19. Evolutionary pattern search algorithms

    SciTech Connect

    Hart, W.E.

    1995-09-19

    This paper defines a class of evolutionary algorithms called evolutionary pattern search algorithms (EPSAs) and analyzes their convergence properties. This class of algorithms is closely related to evolutionary programming, evolution strategies, and real-coded genetic algorithms. EPSAs are self-adapting systems that modify the step size of the mutation operator in response to the success of previous optimization steps. The rule used to adapt the step size can be used to provide a stationary point convergence theory for EPSAs on any continuous function. This convergence theory is based on an extension of the convergence theory for generalized pattern search methods. An experimental analysis of the performance of EPSAs demonstrates that these algorithms can perform a level of global search that is comparable to that of canonical EAs. We also describe a stopping rule for EPSAs, which reliably terminated near stationary points in our experiments. This is the first stopping rule for any class of EAs that can terminate at a given distance from stationary points.
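
    The defining mechanic, mutation step sizes that expand on success and contract on failure, with the step size itself serving as the stopping rule, can be sketched in a few lines. This is a toy version under assumed parameter values, not the class analyzed in the paper:

```python
import numpy as np

def epsa_minimize(f, x0, step=0.5, expand=2.0, contract=0.5,
                  step_tol=1e-8, max_iter=10_000, seed=0):
    """Toy evolutionary pattern search: mutate with the current step size,
    expand it after an improvement, contract it after a failure. Stopping
    on a small step size mirrors the stationary-point stopping rule."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        if step < step_tol:
            break
        cand = x + step * rng.choice([-1.0, 0.0, 1.0], size=x.shape)
        fc = f(cand)
        if fc < fx:
            x, fx, step = cand, fc, step * expand
        else:
            step *= contract
    return x, fx

x, fx = epsa_minimize(lambda v: np.sum((v - 1.5) ** 2), [0.0, 0.0])
print(x.round(4), round(float(fx), 8))
```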

  20. History of dose specification in Brachytherapy: From Threshold Erythema Dose to Computational Dosimetry

    NASA Astrophysics Data System (ADS)

    Williamson, Jeffrey F.

    2006-09-01

    This paper briefly reviews the evolution of brachytherapy dosimetry from 1900 to the present. Dosimetric practices in brachytherapy fall into three distinct eras: During the era of biological dosimetry (1900-1938), radium pioneers could only specify Ra-226 and Rn-222 implants in terms of the mass of radium encapsulated within the implanted sources. Due to the high energy of its emitted gamma rays and the long range of its secondary electrons in air, free-air chambers could not be used to quantify the output of Ra-226 sources in terms of exposure. Biological dosimetry, most prominently the threshold erythema dose, gained currency as a means of intercomparing radium treatments with exposure-calibrated orthovoltage x-ray units. The classical dosimetry era (1940-1980) began with successful exposure standardization of Ra-226 sources by Bragg-Gray cavity chambers. Classical dose-computation algorithms, based upon 1-D buildup factor measurements and point-source superposition computational algorithms, were able to accommodate artificial radionuclides such as Co-60, Ir-192, and Cs-137. The quantitative dosimetry era (1980- ) arose in response to the increasing utilization of low energy K-capture radionuclides such as I-125 and Pd-103, for which classical approaches could not be expected to estimate correct doses. This led to intensive development of both experimental (largely TLD-100 dosimetry) and Monte Carlo dosimetry techniques along with more accurate air-kerma strength standards. As a result of extensive benchmarking and intercomparison of these different methods, single-seed low-energy radionuclide dose distributions are now known with a total uncertainty of 3%-5%.

  1. Dose discrepancies in the buildup region and their impact on dose calculations for IMRT fields

    SciTech Connect

    Hsu, Shu-Hui; Moran, Jean M.; Chen Yu; Kulasekere, Ravi; Roberson, Peter L.

    2010-05-15

    Purpose: Dose accuracy in the buildup region for radiotherapy treatment planning suffers from challenges in both measurement and calculation. This study investigates the dosimetry in the buildup region at normal and oblique incidences for open and IMRT fields and assesses the quality of the treatment planning calculations. Methods: This study was divided into three parts. First, percent depth doses and profiles (for 5x5, 10x10, 20x20, and 30x30 cm^2 field sizes at 0, 45, and 70 degree incidences) were measured in the buildup region in Solid Water using an Attix parallel plate chamber and Kodak XV film, respectively. Second, the parameters in the empirical contamination (EC) term of the convolution/superposition (CVSP) calculation algorithm were fitted based on open field measurements. Finally, seven segmental head-and-neck IMRT fields were measured on a flat phantom geometry and compared to calculations using gamma and dose-gradient compensation (C) indices to evaluate the impact of residual discrepancies and to assess the adequacy of the contamination term for IMRT fields. Results: Local deviations between measurements and calculations for open fields were within 1% and 4% in the buildup region for normal and oblique incidences, respectively. The C index with 5%/1 mm criteria for IMRT fields ranged from 89% to 99% and from 96% to 98% at 2 mm and 10 cm depths, respectively. The quality of agreement in the buildup region for open and IMRT fields is comparable to that in nonbuildup regions. Conclusions: The added EC term in CVSP was determined to be adequate for both open and IMRT fields. Due to the dependence of calculation accuracy on (1) EC modeling, (2) internal convolution and density grid sizes, (3) implementation details in the algorithm, and (4) the accuracy of measurements used for treatment planning system commissioning, the authors recommend an evaluation of the accuracy of near-surface dose calculations as a part of treatment planning commissioning.
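
    For reference, a minimal 1-D global gamma computation of the kind used for such measurement-versus-calculation comparisons; the criteria and test profiles below are arbitrary choices for illustration (the study quotes a 5%/1 mm C index):

```python
import numpy as np

def gamma_1d(x, dose_ref, dose_eval, dd=0.05, dta=1.0):
    """Minimal global 1-D gamma: dd = fractional dose criterion (of the
    reference maximum), dta = distance criterion in the units of x."""
    g = np.empty_like(dose_ref)
    norm = dd * dose_ref.max()
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        dose_term = ((dose_eval - di) / norm) ** 2
        dist_term = ((x - xi) / dta) ** 2
        g[i] = np.sqrt((dose_term + dist_term).min())
    return g

x = np.linspace(0.0, 10.0, 101)                 # mm
ref = np.exp(-((x - 5.0) ** 2) / 2.0)
ev = 1.02 * np.exp(-((x - 5.2) ** 2) / 2.0)     # shifted and rescaled
g = gamma_1d(x, ref, ev, dd=0.05, dta=1.0)
print(f"pass rate (gamma <= 1): {100 * (g <= 1).mean():.1f}%")
```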

  2. Dose discrepancies in the buildup region and their impact on dose calculations for IMRT fields

    PubMed Central

    Hsu, Shu-Hui; Moran, Jean M.; Chen, Yu; Kulasekere, Ravi; Roberson, Peter L.

    2010-01-01

    Purpose: Dose accuracy in the buildup region for radiotherapy treatment planning suffers from challenges in both measurement and calculation. This study investigates the dosimetry in the buildup region at normal and oblique incidences for open and IMRT fields and assesses the quality of the treatment planning calculations. Methods: This study was divided into three parts. First, percent depth doses and profiles (for 5×5, 10×10, 20×20, and 30×30 cm2 field sizes at 0°, 45°, and 70° incidences) were measured in the buildup region in Solid Water using an Attix parallel plate chamber and Kodak XV film, respectively. Second, the parameters in the empirical contamination (EC) term of the convolution/superposition (CVSP) calculation algorithm were fitted based on open field measurements. Finally, seven segmental head-and-neck IMRT fields were measured on a flat phantom geometry and compared to calculations using γ and dose-gradient compensation (C) indices to evaluate the impact of residual discrepancies and to assess the adequacy of the contamination term for IMRT fields. Results: Local deviations between measurements and calculations for open fields were within 1% and 4% in the buildup region for normal and oblique incidences, respectively. The C index with 5%/1 mm criteria for IMRT fields ranged from 89% to 99% and from 96% to 98% at 2 mm and 10 cm depths, respectively. The quality of agreement in the buildup region for open and IMRT fields is comparable to that in nonbuildup regions. Conclusions: The added EC term in CVSP was determined to be adequate for both open and IMRT fields. Due to the dependence of calculation accuracy on (1) EC modeling, (2) internal convolution and density grid sizes, (3) implementation details in the algorithm, and (4) the accuracy of measurements used for treatment planning system commissioning, the authors recommend an evaluation of the accuracy of near-surface dose calculations as a part of treatment planning commissioning.

  3. Monte Carlo dose calculations in the treatment of a pelvis with implant and comparison with pencil-beam calculations

    SciTech Connect

    Laub, Wolfram U.; Nuesslin, Fridtjof

    2003-12-31

    In the present paper, dose distributions calculated with the Monte Carlo code EGS4 and with a pencil-beam algorithm are compared for the treatment of a pelvis with an implant. Overestimations of dose values inside the target volume by the pencil-beam algorithm of up to 10% were found, which are attributed to the underestimation of the absorption of photons by the implant. The differences in dose distributions are also expressed by comparing the tumor control probability (TCP) of the Monte Carlo dose calculations with the TCP of the pencil-beam calculations. A TCP reduction of the order of 30% was found.
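
    The TCP comparison can be reproduced in spirit with the standard Poisson/linear-quadratic model; the parameter values below are purely illustrative and chosen so that a ~10% dose deficit produces a TCP drop of the magnitude the paper reports:

```python
import numpy as np

def tcp_poisson(dose_gy, n_clonogens=1e8, alpha=0.25, beta=0.025, n_frac=30):
    """Poisson TCP with linear-quadratic cell survival (toy parameters)."""
    d = dose_gy / n_frac
    sf = np.exp(-n_frac * (alpha * d + beta * d * d))  # surviving fraction
    return np.exp(-n_clonogens * sf)

# A ~10% pencil-beam overestimate means the tumor actually gets less dose:
print(f"TCP at 70 Gy (planned):   {tcp_poisson(70.0):.3f}")
print(f"TCP at 63 Gy (delivered): {tcp_poisson(63.0):.3f}")
```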

  4. Fast voxel and polygon ray-tracing algorithms in intensity modulated radiation therapy treatment planning

    SciTech Connect

    Fox, Christopher; Romeijn, H. Edwin; Dempsey, James F.

    2006-05-15

    We present work on combining three algorithms to improve ray-tracing efficiency in radiation therapy dose computation. The three algorithms include: An improved point-in-polygon algorithm, incremental voxel ray tracing algorithm, and stereographic projection of beamlets for voxel truncation. The point-in-polygon and incremental voxel ray-tracing algorithms have been used in computer graphics and nuclear medicine applications while the stereographic projection algorithm was developed by our group. These algorithms demonstrate significant improvements over the current standard algorithms in peer reviewed literature, i.e., the polygon and voxel ray-tracing algorithms of Siddon for voxel classification (point-in-polygon testing) and dose computation, respectively, and radius testing for voxel truncation. The presented polygon ray-tracing technique was tested on 10 intensity modulated radiation therapy (IMRT) treatment planning cases that required the classification of between 0.58 and 2.0 million voxels on a 2.5 mm isotropic dose grid into 1-4 targets and 5-14 structures represented as extruded polygons (a.k.a. Siddon prisms). Incremental voxel ray tracing and voxel truncation employing virtual stereographic projection was tested on the same IMRT treatment planning cases where voxel dose was required for 230-2400 beamlets using a finite-size pencil-beam algorithm. Between a 100- and 360-fold CPU time improvement over Siddon's method was observed for the polygon ray-tracing algorithm to perform classification of voxels for target and structure membership. Between a 2.6- and 3.1-fold reduction in CPU time over current algorithms was found for the implementation of incremental ray tracing. Additionally, voxel truncation via stereographic projection was observed to be 11-25 times faster than the radial-testing beamlet extent approach and was further improved 1.7-2.0 fold through point-classification using the method of translation over the cross product technique.
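
    Incremental voxel ray tracing of the kind contrasted with Siddon's method advances one voxel boundary at a time using precomputed per-axis crossing distances. A 2-D sketch of the core loop (the production code is 3-D and heavily tuned; this shows only the traversal logic):

```python
import numpy as np

def voxel_traversal_2d(p0, p1, voxel=1.0):
    """Incremental 2-D voxel walk: step to the nearest voxel boundary on
    either axis until the segment parameter exceeds 1 (core loop only)."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = p1 - p0
    ijk = np.floor(p0 / voxel).astype(int)
    step = np.where(d >= 0, 1, -1)
    next_bound = (ijk + (step > 0)) * voxel
    t_max = np.where(d != 0, (next_bound - p0) / d, np.inf)
    t_delta = np.where(d != 0, voxel / np.abs(d), np.inf)
    cells = [tuple(ijk)]
    while True:
        ax = int(np.argmin(t_max))
        if t_max[ax] > 1.0:          # next crossing lies past the endpoint
            break
        ijk[ax] += step[ax]
        t_max[ax] += t_delta[ax]
        cells.append(tuple(ijk))
    return cells

print(voxel_traversal_2d([0.5, 0.5], [3.5, 2.2]))
# [(0, 0), (1, 0), (1, 1), (2, 1), (3, 1), (3, 2)]
```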

  5. Temperature Corrected Bootstrap Algorithm

    NASA Technical Reports Server (NTRS)

    Comiso, Joey C.; Zwally, H. Jay

    1997-01-01

    A temperature-corrected Bootstrap Algorithm has been developed using Nimbus-7 Scanning Multichannel Microwave Radiometer data in preparation for the upcoming AMSR instrument aboard ADEOS and EOS-PM. The procedure first calculates the effective surface emissivity using emissivities of ice and water at 6 GHz and a mixing formulation that utilizes ice concentrations derived using the current Bootstrap algorithm but using brightness temperatures from the 6 GHz and 37 GHz channels. These effective emissivities are then used to calculate surface ice temperatures, which in turn are used to convert the 18 GHz and 37 GHz brightness temperatures to emissivities. Ice concentrations are then derived using the same technique as with the Bootstrap algorithm but using emissivities instead of brightness temperatures. The results show significant improvement in areas where the ice temperature is expected to vary considerably, such as near the continental areas in the Antarctic, where the ice temperature is colder than average, and in marginal ice zones.
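
    The mixing formulation reduces to a linear blend of end-member emissivities, after which brightness temperatures divide out the retrieved surface temperature. The emissivity constants below are hypothetical placeholders, not the algorithm's calibrated values:

```python
# Hypothetical channel emissivities for ice and open water at 6 GHz.
E_ICE_6, E_WATER_6 = 0.92, 0.55

def surface_temperature(tb6, ice_conc):
    """Mixing formulation: effective 6 GHz emissivity from the current
    Bootstrap ice concentration, then physical temperature TB / e."""
    e_eff = ice_conc * E_ICE_6 + (1.0 - ice_conc) * E_WATER_6
    return tb6 / e_eff

def to_emissivity(tb, tb6, ice_conc):
    """Convert an 18 or 37 GHz brightness temperature to emissivity
    using the 6 GHz-derived surface temperature."""
    return tb / surface_temperature(tb6, ice_conc)

print(to_emissivity(tb=230.0, tb6=240.0, ice_conc=0.9))
```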

  6. Power spectral estimation algorithms

    NASA Technical Reports Server (NTRS)

    Bhatia, Manjit S.

    1989-01-01

    Algorithms to estimate the power spectrum using Maximum Entropy Methods were developed. These algorithms were coded in FORTRAN 77 and were implemented on the VAX 780. The important considerations in this analysis are: (1) resolution, i.e., how close in frequency two spectral components can be spaced and still be identified; (2) dynamic range, i.e., how small a spectral peak can be, relative to the largest, and still be observed in the spectra; and (3) variance, i.e., how close the estimated spectrum is to the actual spectrum. The application of the algorithms based on Maximum Entropy Methods to a variety of data shows that these criteria are met quite well. Additional work in this direction would help confirm the findings. All of the software developed was turned over to the technical monitor. A copy of a typical program is included. Some of the actual data and graphs used on this data are also included.
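
    The original FORTRAN is not reproduced here, but the standard maximum entropy estimator (Burg's method) is compact enough to sketch: fit an autoregressive model by minimizing combined forward/backward prediction error, then evaluate the all-pole spectrum. The order, signal, and seed below are arbitrary illustration choices:

```python
import numpy as np

def burg_psd(x, order, nfft=1024):
    """Maximum entropy (Burg) spectral estimate: fit an AR model by
    minimizing forward+backward prediction error, then evaluate the
    all-pole spectrum e / |A(e^{jw})|^2."""
    x = np.asarray(x, dtype=float)
    a = np.array([1.0])
    e = np.mean(x ** 2)
    f, b = x[1:].copy(), x[:-1].copy()
    for _ in range(order):
        k = -2.0 * np.dot(f, b) / (np.dot(f, f) + np.dot(b, b))
        a = np.concatenate([a, [0.0]]) + k * np.concatenate([[0.0], a[::-1]])
        e *= 1.0 - k * k
        f, b = f[1:] + k * b[1:], b[:-1] + k * f[:-1]
    psd = e / np.abs(np.fft.rfft(a, nfft)) ** 2
    return np.fft.rfftfreq(nfft), psd

# Two closely spaced tones in noise: a resolution test.
t = np.arange(256)
x = (np.sin(2 * np.pi * 0.20 * t) + np.sin(2 * np.pi * 0.22 * t)
     + 0.3 * np.random.default_rng(3).normal(size=t.size))
freqs, psd = burg_psd(x, order=24)
print("strongest bins near:", np.sort(freqs[psd.argsort()[-4:]]))
```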

  7. Optical rate sensor algorithms

    NASA Astrophysics Data System (ADS)

    Uhde-Lacovara, Jo A.

    1989-12-01

    Optical sensors, in particular Charge Coupled Device (CCD) arrays, will be used on Space Station to track stars in order to provide inertial attitude reference. Algorithms are presented to derive attitude rate from the optical sensors. The first algorithm is a recursive differentiator. A variance reduction factor (VRF) of 0.0228 was achieved with a rise time of 10 samples. A VRF of 0.2522 gives a rise time of 4 samples. The second algorithm is based on the direct manipulation of the pixel intensity outputs of the sensor. In 1-dimensional simulations, the derived rate was within 0.07 percent of the actual rate in the presence of additive Gaussian noise with a signal to noise ratio of 60 dB.

  8. Optical rate sensor algorithms

    NASA Technical Reports Server (NTRS)

    Uhde-Lacovara, Jo A.

    1989-01-01

    Optical sensors, in particular Charge Coupled Device (CCD) arrays, will be used on Space Station to track stars in order to provide inertial attitude reference. Algorithms are presented to derive attitude rate from the optical sensors. The first algorithm is a recursive differentiator. A variance reduction factor (VRF) of 0.0228 was achieved with a rise time of 10 samples. A VRF of 0.2522 gives a rise time of 4 samples. The second algorithm is based on the direct manipulation of the pixel intensity outputs of the sensor. In 1-dimensional simulations, the derived rate was within 0.07 percent of the actual rate in the presence of additive Gaussian noise with a signal to noise ratio of 60 dB.

  9. Quantification of Proton Dose Calculation Accuracy in the Lung

    SciTech Connect

    Grassberger, Clemens; Daartz, Juliane; Dowdell, Stephen; Ruggieri, Thomas; Sharp, Greg; Paganetti, Harald

    2014-06-01

    Purpose: To quantify the accuracy of a clinical proton treatment planning system (TPS) as well as Monte Carlo (MC)–based dose calculation through measurements and to assess the clinical impact in a cohort of patients with tumors located in the lung. Methods and Materials: A lung phantom and ion chamber array were used to measure the dose to a plane through a tumor embedded in the lung, and to determine the distal fall-off of the proton beam. Results were compared with TPS and MC calculations. Dose distributions in 19 patients (54 fields total) were simulated using MC and compared to the TPS algorithm. Results: MC increased dose calculation accuracy in lung tissue compared with the TPS and reproduced dose measurements in the target to within ±2%. The average difference between measured and predicted dose in a plane through the center of the target was 5.6% for the TPS and 1.6% for MC. MC recalculations in patients showed a mean dose to the clinical target volume on average 3.4% lower than the TPS, exceeding 5% for small fields. For large tumors, MC also predicted consistently higher V5 and V10 to the normal lung, because of a wider lateral penumbra, which was also observed experimentally. Critical structures located distal to the target could show large deviations, although this effect was highly patient specific. Range measurements showed that MC can reduce range uncertainty by a factor of ∼2: the average (maximum) difference to the measured range was 3.9 mm (7.5 mm) for MC and 7 mm (17 mm) for the TPS in lung tissue. Conclusion: Integration of Monte Carlo dose calculation techniques into the clinic would improve treatment quality in proton therapy for lung cancer by avoiding systematic overestimation of target dose and underestimation of dose to normal lung. In addition, the ability to confidently reduce range margins would benefit all patients by potentially lowering toxicity.

  10. New Effective Multithreaded Matching Algorithms

    SciTech Connect

    Manne, Fredrik; Halappanavar, Mahantesh

    2014-05-19

    Matching is an important combinatorial problem with a number of applications in areas such as community detection, sparse linear algebra, and network alignment. Since computing optimal matchings can be very time consuming, several fast approximation algorithms, both sequential and parallel, have been suggested. Common to the algorithms giving the best solutions is that they tend to be sequential by nature, while algorithms more suitable for parallel computation give solutions of less quality. We present a new simple 1/2-approximation algorithm for the weighted matching problem. This algorithm is both faster than any other suggested sequential 1/2-approximation algorithm on almost all inputs and also scales better than previous multithreaded algorithms. We further extend this to a general scalable multithreaded algorithm that computes matchings of weight comparable with the best sequential algorithms. The performance of the suggested algorithms is documented through extensive experiments on different multithreaded architectures.
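
    For context, the classical sequential greedy 1/2-approximation simply scans edges in non-increasing weight order; the paper's contribution is an algorithm that is faster sequentially and scales on multithreaded hardware, which this baseline is not:

```python
def greedy_matching(edges):
    """Classical sequential 1/2-approximation for maximum weight matching:
    scan edges by non-increasing weight, keep an edge iff both endpoints
    are still unmatched."""
    matched, result, total = set(), [], 0.0
    for w, u, v in sorted(edges, reverse=True):
        if u not in matched and v not in matched:
            matched.update((u, v))
            result.append((u, v, w))
            total += w
    return result, total

edges = [(4.0, "a", "b"), (3.0, "b", "c"), (3.0, "c", "d"), (2.0, "a", "d")]
print(greedy_matching(edges))   # keeps (a,b) and (c,d): weight 7.0
```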

  11. Deformable Dose Reconstruction to Optimize the Planning and Delivery of Liver Cancer Radiotherapy

    NASA Astrophysics Data System (ADS)

    Velec, Michael

    The precise delivery of radiation to liver cancer patients results in improved control with higher tumor doses and minimized normal tissue doses. A margin of normal tissue around the tumor must be irradiated, however, to account for treatment delivery uncertainties. Daily image-guidance allows targeting of the liver, a surrogate for the tumor, to reduce geometric errors. However, poor direct tumor visualization, anatomical deformation, and breathing motion introduce uncertainties between the planned dose, calculated on a single pre-treatment computed tomography image, and the dose that is delivered. A novel deformable image registration algorithm based on tissue biomechanics was applied to previous liver cancer patients to track targets and surrounding organs during radiotherapy. Modeling these daily anatomic variations permitted dose accumulation, thereby improving calculations of the delivered doses. The accuracy of the algorithm to track dose was validated using imaging from a deformable, 3-dimensional dosimeter able to optically track absorbed dose. Reconstructing the delivered dose revealed that 70% of patients had substantial deviations from the initial planned dose. An alternative image-guidance technique using respiratory-correlated imaging was simulated, which reduced both the residual tumor targeting errors and the magnitude of the delivered dose deviations. A planning and delivery strategy for liver radiotherapy was then developed that minimizes the impact of breathing motion, and applied a margin to account for the impact of liver deformation during treatment. This margin is 38% smaller on average than the margin used clinically, and permitted an average dose-escalation to liver tumors of 9% for the same risk of toxicity. Simulating the delivered dose with deformable dose reconstruction demonstrated the plans with smaller margins were robust as 90% of patients' tumors received the intended dose. This strategy can be readily implemented with widely

  12. [Fixed-dose combination].

    PubMed

    Nagai, Yoshio

    2015-03-01

    Many patients with type 2 diabetes mellitus (T2DM) do not achieve satisfactory glycemic control with monotherapy alone, and often require multiple oral hypoglycemic agents (OHAs). Combining OHAs with complementary mechanisms of action is fundamental to the management of T2DM. Fixed-dose combination therapy (FDC) offers a method of simplifying complex regimens. Efficacy and tolerability appear to be similar between FDC and treatment with the individual agents. In addition, FDC can enhance adherence, and improved adherence may result in improved glycemic control. Four FDC agents are available in Japan: pioglitazone-glimepiride, pioglitazone-metformin, pioglitazone-alogliptin, and voglibose-mitiglinide. In this review, the advantages and disadvantages of these four combinations are identified and discussed. PMID:25812374

  13. Standardized radiological dose evaluations

    SciTech Connect

    Peterson, V.L.; Stahlnecker, E.

    1996-05-01

    Following the end of the Cold War, the mission of the Rocky Flats Environmental Technology Site changed from production of nuclear weapons to cleanup. Authorization basis documents for the facilities, primarily the Final Safety Analysis Reports, are being replaced with new ones in which accident scenarios are sorted into coarse bins of consequence and frequency, similar to the approach of DOE-STD-3011-94. Because this binning does not require high precision, a standardized approach for radiological dose evaluations is taken for all the facilities at the site. This is done through a standard calculation "template" for use by all safety analysts preparing the new documents. This report describes this template and its use.

  14. A Bayesian Dose-finding Design for Oncology Clinical Trials of Combinational Biological Agents

    PubMed Central

    Cai, Chunyan; Yuan, Ying; Ji, Yuan

    2013-01-01

    Treating patients with novel biological agents is becoming a leading trend in oncology. Unlike cytotoxic agents, for which efficacy and toxicity monotonically increase with dose, biological agents may exhibit non-monotonic patterns in their dose-response relationships. Using a trial with two biological agents as an example, we propose a dose-finding design to identify the biologically optimal dose combination (BODC), which is defined as the dose combination of the two agents with the highest efficacy and tolerable toxicity. A change-point model is used to reflect the fact that the dose-toxicity surface of the combinational agents may plateau at higher dose levels, and a flexible logistic model is proposed to accommodate the possible non-monotonic pattern for the dose-efficacy relationship. During the trial, we continuously update the posterior estimates of toxicity and efficacy and assign patients to the most appropriate dose combination. We propose a novel dose-finding algorithm to encourage sufficient exploration of untried dose combinations in the two-dimensional space. Extensive simulation studies show that the proposed design has desirable operating characteristics in identifying the BODC under various patterns of dose-toxicity and dose-efficacy relationships. PMID:24511160
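
    A stripped-down illustration of the continuous posterior updating: replace the paper's change-point and flexible logistic models with a conjugate Beta-Binomial toxicity screen on a grid of dose combinations. All counts, priors, and thresholds below are synthetic:

```python
import numpy as np
from scipy.stats import beta

# Synthetic toxicity data on a 3x3 grid of dose levels for two agents:
# n = patients treated, tox = toxicities observed at each combination.
n = np.array([[6, 3, 0],
              [3, 2, 0],
              [0, 0, 0]])
tox = np.array([[0, 1, 0],
                [1, 1, 0],
                [0, 0, 0]])

# Conjugate Beta(0.5, 0.5) prior, updated after each cohort.
a = 0.5 + tox
b = 0.5 + (n - tox)

# Posterior probability that toxicity is below a 33% target at each cell.
p_safe = beta.cdf(0.33, a, b)
print(np.round(p_safe, 2))
```

    The actual design couples a toxicity model like this with an efficacy model and an explicit rule that encourages exploration of untried combinations in the two-dimensional grid, which this simple screen does not capture.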

  15. A Bayesian Dose-finding Design for Oncology Clinical Trials of Combinational Biological Agents.

    PubMed

    Cai, Chunyan; Yuan, Ying; Ji, Yuan

    2014-01-01

    Treating patients with novel biological agents is becoming a leading trend in oncology. Unlike cytotoxic agents, for which efficacy and toxicity monotonically increase with dose, biological agents may exhibit non-monotonic patterns in their dose-response relationships. Using a trial with two biological agents as an example, we propose a dose-finding design to identify the biologically optimal dose combination (BODC), which is defined as the dose combination of the two agents with the highest efficacy and tolerable toxicity. A change-point model is used to reflect the fact that the dose-toxicity surface of the combinational agents may plateau at higher dose levels, and a flexible logistic model is proposed to accommodate the possible non-monotonic pattern for the dose-efficacy relationship. During the trial, we continuously update the posterior estimates of toxicity and efficacy and assign patients to the most appropriate dose combination. We propose a novel dose-finding algorithm to encourage sufficient exploration of untried dose combinations in the two-dimensional space. Extensive simulation studies show that the proposed design has desirable operating characteristics in identifying the BODC under various patterns of dose-toxicity and dose-efficacy relationships. PMID:24511160

  16. Radiochromic film based transit dosimetry for verification of dose delivery with intensity modulated radiotherapy

    SciTech Connect

    Chung, Kwangzoo; Lee, Kiho; Shin, Dongho; Kyung Lim, Young; Byeong Lee, Se; Yoon, Myonggeun; Son, Jaeman; Yong Park, Sung

    2013-02-15

    Purpose: To evaluate transit-dose-based patient-specific quality assurance (QA) of intensity-modulated radiation therapy (IMRT) for verification of the accuracy of the dose delivered to the patient. Methods: Five IMRT plans were selected and utilized to irradiate a homogeneous plastic water phantom and an inhomogeneous anthropomorphic phantom. The transit dose distribution was measured with radiochromic film and was compared with the computed dose map on the same plane using a gamma index with a 3% dose difference and 3 mm distance-to-agreement tolerance limit. Results: While the average gamma index for comparisons of dose distributions was less than one for 98.9% of all pixels from the transit dose with the homogeneous phantom, the passing rate was reduced to 95.0% for the transit dose with the inhomogeneous phantom. Transit doses due to a 5 mm setup error may cause up to a 50% failure rate of the gamma index. Conclusions: Transit-dose-based IMRT QA may be superior to the traditional QA method, since the former can show whether the inhomogeneity correction algorithm from the TPS is accurate. In addition, transit-dose-based IMRT QA can be used to verify the accuracy of the dose delivered to the patient during treatment by revealing significant increases in the failure rate of the gamma index resulting from errors in patient positioning during treatment.

  17. Automatic design of decision-tree algorithms with evolutionary algorithms.

    PubMed

    Barros, Rodrigo C; Basgalupp, Márcio P; de Carvalho, André C P L F; Freitas, Alex A

    2013-01-01

    This study reports the empirical analysis of a hyper-heuristic evolutionary algorithm that is capable of automatically designing top-down decision-tree induction algorithms. Top-down decision-tree algorithms are of great importance, considering their ability to provide an intuitive and accurate knowledge representation for classification problems. The automatic design of these algorithms seems timely, given the large literature accumulated over more than 40 years of research in the manual design of decision-tree induction algorithms. The proposed hyper-heuristic evolutionary algorithm, HEAD-DT, is extensively tested using 20 public UCI datasets and 10 microarray gene expression datasets. The algorithms automatically designed by HEAD-DT are compared with traditional decision-tree induction algorithms, such as C4.5 and CART. Experimental results show that HEAD-DT is capable of generating algorithms which are significantly more accurate than C4.5 and CART.

  18. Dose Calibration of the ISS-RAD Fast Neutron Detector

    NASA Technical Reports Server (NTRS)

    Zeitlin, C.

    2015-01-01

    The ISS-RAD instrument has been fabricated by Southwest Research Institute and delivered to NASA for flight to the ISS in late 2015 or early 2016. ISS-RAD is essentially two instruments that share a common interface to ISS. The two instruments are the Charged Particle Detector (CPD), which is very similar to the MSL-RAD detector on Mars, and the Fast Neutron Detector (FND), which is a boron-loaded plastic scintillator with readout optimized for the 0.5 to 10 MeV energy range. As the FND is completely new, it has been necessary to develop methodology to allow it to be used to measure the neutron dose and dose equivalent. This talk will focus on the methods developed and their implementation using calibration data obtained in quasi-monoenergetic (QMN) neutron fields at the PTB facility in Braunschweig, Germany. The QMN data allow us to determine an approximate response function, from which we estimate dose and dose equivalent contributions per detected neutron as a function of the pulse height. We refer to these as the "pSv per count" curves for dose equivalent and the "pGy per count" curves for dose. The FND is required to provide a dose equivalent measurement with an accuracy of ±10% of the known value in a calibrated AmBe field. Four variants of the analysis method were developed, corresponding to two different approximations of the pSv per count curve, and two different implementations, one for real-time analysis onboard ISS and one for ground analysis. We will show that the preferred method, when applied in either real-time or ground analysis, yields good accuracy for the AmBe field. We find that the real-time algorithm is more susceptible to chance-coincidence background than is the algorithm used in ground analysis, so that the best estimates will come from the latter.
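
    The "pSv per count" idea reduces to a dot product between the measured pulse-height spectrum and the calibration curve; both arrays below are hypothetical stand-ins for the QMN-derived response, shown only to make the bookkeeping concrete:

```python
import numpy as np

# Hypothetical calibration: dose equivalent per detected neutron count,
# per pulse-height bin (the QMN-derived "pSv per count" curve).
psv_per_count = np.linspace(2.0, 40.0, 64)

# One accumulation period's pulse-height spectrum (synthetic counts).
counts = np.random.default_rng(2).poisson(50, 64)

dose_equivalent_usv = np.dot(counts, psv_per_count) / 1e6
print(f"dose equivalent: {dose_equivalent_usv:.3f} uSv")
```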

  19. [Examination of Visual Effect in Low-dose Cerebral CT Perfusion Phantom Image Using Iterative Reconstruction].

    PubMed

    Ohmura, Tomomi; Lee, Yongbum; Takahashi, Noriyuki; Sato, Yuichiro; Ishida, Takato; Toyoshima, Hideto

    2015-11-01

    CT perfusion (CTP) provides cerebrovascular circulation images for the assessment of stroke patients, but at the expense of an increased radiation dose from dynamic scanning. Because the iterative reconstruction (IR) method can decrease image noise, it has the potential to reduce radiation dose. The purpose of this study was to assess the visual effect of the IR method by using a digital perfusion phantom. The digital perfusion phantom was created from CT images reconstructed with the filtered back projection (FBP) method and the IR method at five exposure doses. Cerebral blood flow (CBF) images at the various exposure doses were derived using a deconvolution algorithm. Contrast-to-noise ratio (CNR) and visual assessment were compared among the various exposure doses and each reconstruction. At low exposure dose, the IR method showed higher CNR in the severe ischemic area than the FBP method, and the visual assessment was significantly improved. The IR method is useful for improving the image quality of low-dose CTP. PMID:26596197

  20. On-board predicting algorithm of radiation exposure for the International Space Station radiation monitoring system

    NASA Astrophysics Data System (ADS)

    Benghin, V. V.

    2008-02-01

    The radiation monitoring system (RMS) has operated on board the International Space Station (ISS) practically continuously since August 2001. In June 2005, the RMS software was updated. The new RMS software detects worsening of the radiation environment due to solar proton events and informs the crew. An on-board radiation environment prediction algorithm is part of the new software. This algorithm detects dose rate increases on the high-latitude parts of the ISS orbit and estimates the time intervals and dose rate values for subsequent crossings of the high-latitude areas. A brief description of the on-board radiation exposure prediction algorithm is presented.

  1. Contact solution algorithms

    NASA Technical Reports Server (NTRS)

    Tielking, John T.

    1989-01-01

    Two algorithms for obtaining static contact solutions are described in this presentation. Although they were derived for contact problems involving specific structures (a tire and a solid rubber cylinder), they are sufficiently general to be applied to other shell-of-revolution and solid-body contact problems. The shell-of-revolution contact algorithm is a method of obtaining a point load influence coefficient matrix for the portion of shell surface that is expected to carry a contact load. If the shell is sufficiently linear with respect to contact loading, a single influence coefficient matrix can be used to obtain a good approximation of the contact pressure distribution. Otherwise, the matrix will be updated to reflect nonlinear load-deflection behavior. The solid-body contact algorithm utilizes a Lagrange multiplier to include the contact constraint in a potential energy functional. The solution is found by applying the principle of minimum potential energy. The Lagrange multiplier is identified as the contact load resultant for a specific deflection. At present, only frictionless contact solutions have been obtained with these algorithms. A sliding tread element has been developed to calculate friction shear force in the contact region of the rolling shell-of-revolution tire model.
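
    The Lagrange multiplier formulation can be made concrete: augment the potential energy with the contact constraint and solve the resulting saddle-point (KKT) system, where the multiplier is identified with the contact load resultant, as in the abstract. A two-degree-of-freedom toy problem, with all stiffnesses and loads invented:

```python
import numpy as np

# Toy 1-D spring chain pressed toward a rigid wall: minimize potential
# energy 0.5 u^T K u - f^T u subject to the tip contact constraint.
K = np.array([[ 2.0, -1.0],
              [-1.0,  1.0]])      # stiffness matrix (invented)
f = np.array([0.0, 1.0])          # applied loads (invented)
gap = 0.3                         # prescribed contact deflection
C = np.array([[0.0, 1.0]])        # constraint: tip displacement = gap

# Stationarity of the augmented functional gives the KKT system:
# [[K, C^T], [C, 0]] [u; lam] = [f; gap]
kkt = np.block([[K, C.T], [C, np.zeros((1, 1))]])
sol = np.linalg.solve(kkt, np.concatenate([f, [gap]]))
u, lam = sol[:2], sol[2]
print("displacements:", u, " contact load (multiplier):", lam)
```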

  2. Comprehensive eye evaluation algorithm

    NASA Astrophysics Data System (ADS)

    Agurto, C.; Nemeth, S.; Zamora, G.; Vahtel, M.; Soliz, P.; Barriga, S.

    2016-03-01

    In recent years, several research groups have developed automatic algorithms to detect diabetic retinopathy (DR) in individuals with diabetes (DM), using digital retinal images. Studies have indicated that diabetics have 1.5 times the annual risk of developing primary open angle glaucoma (POAG) as do people without DM. Moreover, DM patients have 1.8 times the risk for age-related macular degeneration (AMD). Although numerous investigators are developing automatic DR detection algorithms, there have been few successful efforts to create an automatic algorithm that can detect other ocular diseases, such as POAG and AMD. Consequently, our aim in the current study was to develop a comprehensive eye evaluation algorithm that not only detects DR in retinal images, but also automatically identifies glaucoma suspects and AMD by integrating other personal medical information with the retinal features. The proposed system is fully automatic and provides the likelihood of each of the three eye diseases. The system was evaluated in two datasets of 104 and 88 diabetic cases. For each eye, we used two non-mydriatic digital color fundus photographs (macula and optic disc centered) and, when available, information about age, duration of diabetes, cataracts, hypertension, gender, and laboratory data. Our results show that the combination of multimodal features can increase the AUC by up to 5%, 7%, and 8% in the detection of AMD, DR, and glaucoma, respectively. Marked improvement was achieved when laboratory results were combined with retinal image features.

  3. PSC algorithm description

    NASA Technical Reports Server (NTRS)

    Nobbs, Steven G.

    1995-01-01

    An overview of the performance seeking control (PSC) algorithm and details of the important components of the algorithm are given. The onboard propulsion system models, the linear programming optimization, and engine control interface are described. The PSC algorithm receives input from various computers on the aircraft including the digital flight computer, digital engine control, and electronic inlet control. The PSC algorithm contains compact models of the propulsion system including the inlet, engine, and nozzle. The models compute propulsion system parameters, such as inlet drag and fan stall margin, which are not directly measurable in flight. The compact models also compute sensitivities of the propulsion system parameters to change in control variables. The engine model consists of a linear steady state variable model (SSVM) and a nonlinear model. The SSVM is updated with efficiency factors calculated in the engine model update logic, or Kalman filter. The efficiency factors are used to adjust the SSVM to match the actual engine. The propulsion system models are mathematically integrated to form an overall propulsion system model. The propulsion system model is then optimized using a linear programming optimization scheme. The goal of the optimization is determined from the selected PSC mode of operation. The resulting trims are used to compute a new operating point about which the optimization process is repeated. This process is continued until an overall (global) optimum is reached before applying the trims to the controllers.
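
    The optimization stage can be sketched with an off-the-shelf LP solver: minimize a linearized objective (e.g., fuel flow) subject to linearized propulsion limits derived from the compact models. All sensitivities and margins below are invented for illustration, not values from the PSC algorithm:

```python
import numpy as np
from scipy.optimize import linprog

# Toy PSC-style trim selection: maximize predicted fuel-flow benefit
# (minimize its negative) subject to linearized propulsion-system limits.
c = np.array([-0.8, -0.5])          # objective sensitivity to each trim
A_ub = np.array([[1.2, 0.4],        # stall margin consumed per unit trim
                 [0.3, 1.0]])       # temperature margin consumed per trim
b_ub = np.array([1.0, 0.8])         # available margins
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, 1.0), (0.0, 1.0)])
print("optimal trims:", res.x.round(3), " objective:", round(res.fun, 3))
```

    In the flight software this solve is repeated about each new operating point until the trims converge to a global optimum, matching the iterative scheme described above.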

  4. The Xmath Integration Algorithm

    ERIC Educational Resources Information Center

    Bringslid, Odd

    2009-01-01

    The projects Xmath (Bringslid and Canessa, 2002) and dMath (Bringslid, de la Villa and Rodriguez, 2007) were supported by the European Commission in the so called Minerva Action (Xmath) and The Leonardo da Vinci programme (dMath). The Xmath eBook (Bringslid, 2006) includes algorithms into a wide range of undergraduate mathematical issues embedded…

  5. Quantum gate decomposition algorithms.

    SciTech Connect

    Slepoy, Alexander

    2006-07-01

    Quantum computing algorithms can be conveniently expressed in the format of quantum logical circuits. Such circuits consist of sequential coupled operations, termed "quantum gates", acting on quantum analogs of bits called qubits. We review a recently proposed method [1] for constructing general quantum gates operating on n qubits, composed of a sequence of generic elementary gates.

  6. Robotic Follow Algorithm

    SciTech Connect

    2005-03-30

    The Robotic Follow Algorithm enables any robotic vehicle to follow a moving target while reactively choosing a route around nearby obstacles. The robotic follow behavior can be used with different camera systems and can employ thermal or visual tracking as well as other tracking methods such as radio frequency tags.

  7. Data Structures and Algorithms.

    ERIC Educational Resources Information Center

    Wirth, Niklaus

    1984-01-01

    Built-in data structures are the registers and memory words where binary values are stored; hard-wired algorithms are the fixed rules, embodied in electronic logic circuits, by which stored data are interpreted as instructions to be executed. Various topics related to these two basic elements of every computer program are discussed. (JN)

  8. The Lure of Algorithms

    ERIC Educational Resources Information Center

    Drake, Michael

    2011-01-01

    One debate that periodically arises in mathematics education is the issue of how to teach calculation more effectively. "Modern" approaches seem to initially favour mental calculation, informal methods, and the development of understanding before introducing written forms, while traditionalists tend to champion particular algorithms. The debate is…

  9. Dose differences in intensity-modulated radiotherapy plans calculated with pencil beam and Monte Carlo for lung SBRT.

    PubMed

    Liu, Han; Zhuang, Tingliang; Stephans, Kevin; Videtic, Gregory; Raithel, Stephen; Djemil, Toufik; Xia, Ping

    2015-11-08

    For patients with medically inoperable early-stage non-small cell lung cancer (NSCLC) treated with stereotactic body radiation therapy, early treatment plans were based on a simpler dose calculation algorithm, the pencil beam (PB) calculation. Because these patients had the longest treatment follow-up, identifying dose differences between the PB calculated dose and the Monte Carlo calculated dose is clinically important for understanding treatment outcomes. Previous studies found significant dose differences between the PB dose calculation and more accurate dose calculation algorithms, such as convolution-based or Monte Carlo (MC), mostly for three-dimensional conformal radiotherapy (3D CRT) plans. The aim of this study is to investigate whether these observed dose differences also exist for intensity-modulated radiotherapy (IMRT) plans for both centrally and peripherally located tumors. Seventy patients (35 central and 35 peripheral) were retrospectively selected for this study. The clinical IMRT plans that were initially calculated with the PB algorithm were recalculated with the MC algorithm. Among these paired plans, dosimetric parameters were compared for the targets and critical organs. When compared to MC calculation, PB calculation overestimated doses to the planning target volumes (PTVs) of central and peripheral tumors with different magnitudes. The doses to 95% of the central and peripheral PTVs were overestimated by 9.7% ± 5.6% and 12.0% ± 7.3%, respectively. This dose overestimation did not affect doses to the critical organs, such as the spinal cord and lung. In conclusion, for NSCLC treated with IMRT, dose differences between the PB and MC calculations differed from those seen for 3D CRT. No significant dose differences in critical organs were observed between the two calculations.
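
    The PTV comparison above is expressed through DVH metrics such as the dose to 95% of the volume (D95). A minimal sketch of that metric, assuming equal-volume voxel dose samples and hypothetical PB/MC dose arrays (the 10% offset mimics the peripheral-tumour result, nothing more):

      import numpy as np

      def dose_at_volume(doses, volume_fraction):
          """D_x: minimum dose received by the hottest `volume_fraction` of the
          voxels, i.e. D95 = dose_at_volume(d, 0.95). Assumes equal voxel volumes."""
          d = np.sort(np.asarray(doses))[::-1]           # hottest voxels first
          idx = int(np.ceil(volume_fraction * len(d))) - 1
          return d[idx]

      # Hypothetical paired PTV dose samples (Gy) from the two calculations.
      rng = np.random.default_rng(0)
      d_pb = rng.normal(50.0, 1.5, 10000)   # pencil beam
      d_mc = d_pb * 0.90                    # MC ~10% lower, made up for illustration
      d95_pb, d95_mc = dose_at_volume(d_pb, 0.95), dose_at_volume(d_mc, 0.95)
      print(f"PB D95 = {d95_pb:.1f} Gy, MC D95 = {d95_mc:.1f} Gy, "
            f"overestimate = {100 * (d95_pb - d95_mc) / d95_mc:.1f}%")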

  10. Benchmarking monthly homogenization algorithms

    NASA Astrophysics Data System (ADS)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.

    2011-08-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data
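
    Of the performance metrics listed, the centered root mean square error and the linear trend error are straightforward to compute. A minimal sketch, assuming a synthetic truth series and one undetected break (all values hypothetical):

      import numpy as np

      def centered_rmse(homogenized, truth):
          """Centered RMSE: mean offsets are removed first, so only the shape
          of the remaining inhomogeneity signal is penalized."""
          h = homogenized - np.mean(homogenized)
          t = truth - np.mean(truth)
          return np.sqrt(np.mean((h - t) ** 2))

      def trend_error(homogenized, truth):
          """Difference in fitted linear trend (units per time step)."""
          x = np.arange(len(truth))
          return np.polyfit(x, homogenized, 1)[0] - np.polyfit(x, truth, 1)[0]

      # Hypothetical monthly series: truth plus one undetected break of -0.5.
      truth = np.random.default_rng(1).normal(10.0, 0.8, 600)
      contaminated = truth.copy()
      contaminated[300:] -= 0.5
      print(centered_rmse(contaminated, truth), trend_error(contaminated, truth))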

  11. Comparison of different dose reduction system in computed tomography for orthodontic applications

    PubMed Central

    FANUCCI, E.; FIASCHETTI, V.; OTTRIA, L.; MATALONI, M; ACAMPORA, V.; LIONE, R.; BARLATTANI, A.; SIMONETTI, G.

    2011-01-01

    SUMMARY To correlate different CT systems: MSCT (multislice computed tomography) with different acquisition parameters (100 kV, 80 kV) and a different reconstruction algorithm (ASIR), and CBCT (cone beam computed tomography) examination, in terms of absorbed X-ray dose and diagnostic accuracy. 80 kV protocols compared with 100 kV protocols resulted in a reduced total radiation dose without relevant loss of diagnostic image information and quality. CBCT protocols compared with 80 kV MSCT protocols resulted in a reduced total radiation dose with some, although not substantial, loss of diagnostic image information and quality. In addition, the new ASIR system applicable to MSCT equipment allows imaging at 50% of the dose without compromising image quality.

  12. Dose refinement. ARAC's role

    SciTech Connect

    Ellis, J. S.; Sullivan, T. J.; Baskett, R. L.

    1998-06-01

    The Atmospheric Release Advisory Capability (ARAC), located at the Lawrence Livermore National Laboratory, has been involved since the late 1970s in assessing consequences from nuclear and other hazardous material releases into the atmosphere. ARAC's primary role has been emergency response. However, after the emergency phase, there is still a significant role for dispersion modeling. This work usually involves refining the source term and, hence, the dose to the affected populations as additional information becomes available in the form of source term estimates (release rates, mix of material, and release geometry) and any measurements from passage of the plume and deposition on the ground. Many of the ARAC responses have been documented elsewhere [1]. Some of the more notable radiological releases for which ARAC has participated in the post-emergency phase are the 1979 Three Mile Island nuclear power plant (NPP) accident outside Harrisburg, PA, the 1986 Chernobyl NPP accident in the Ukraine, and the 1996 Japan Tokai nuclear processing plant explosion. ARAC has also done post-emergency phase analyses for the 1978 Russian satellite COSMOS 954 reentry and subsequent partial burn-up of its onboard nuclear reactor, which deposited radioactive materials on the ground in Canada; the 1986 uranium hexafluoride spill in Gore, OK; the 1993 Russian Tomsk-7 nuclear waste tank explosion; and lesser releases of mostly tritium. In addition, ARAC has performed a key role in the contingency planning for possible accidental releases during the launch of spacecraft with radioisotope thermoelectric generators (RTGs) on board (i.e. Galileo, Ulysses, Mars-Pathfinder, and Cassini), and routinely exercises with the Federal Radiological Monitoring and Assessment Center (FRMAC) in preparation for offsite consequences of radiological releases from NPPs and nuclear weapon accidents or incidents. Several accident post-emergency phase assessments are discussed in this paper in order to illustrate

  13. Fast motion-including dose error reconstruction for VMAT with and without MLC tracking.

    PubMed

    Ravkilde, Thomas; Keall, Paul J; Grau, Cai; Høyer, Morten; Poulsen, Per R

    2014-12-01

    Multileaf collimator (MLC) tracking is a promising and clinically emerging treatment modality for radiotherapy of mobile tumours. Still, new quality assurance (QA) methods are warranted to safely introduce MLC tracking in the clinic. The purpose of this study was to create and experimentally validate a simple model for fast motion-including dose error reconstruction applicable to intrafractional QA of MLC tracking treatments of moving targets. MLC tracking experiments were performed on a standard linear accelerator with prototype MLC tracking software guided by an electromagnetic transponder system. A three-axis motion stage reproduced eight representative tumour trajectories; four lung and four prostate. Low and high modulation 6 MV single-arc volumetric modulated arc therapy treatment plans were delivered for each trajectory with and without MLC tracking, as well as without motion for reference. Temporally resolved doses were measured during all treatments using a biplanar dosimeter. Offline, the dose delivered to each of 1069 diodes in the dosimeter was reconstructed with 500 ms temporal resolution by a motion-including pencil beam convolution algorithm developed in-house. The accuracy of the algorithm for reconstruction of dose and motion-induced dose errors throughout the tracking and non-tracking beam deliveries was quantified. Doses were reconstructed with a mean dose difference relative to the measurements of -0.5% (5.5% standard deviation) for cumulative dose. More importantly, the root-mean-square deviation between reconstructed and measured motion-induced 3%/3 mm γ failure rates (dose error) was 2.6%. The mean computation time for each calculation of dose and dose error was 295 ms. The motion-including dose reconstruction allows accurate temporal and spatial pinpointing of errors in absorbed dose and is adequately fast to be feasible for online use. An online implementation could allow treatment intervention in case of erroneous dose delivery in both
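
    The 3%/3 mm γ failure rates quoted above come from a standard gamma analysis. A brute-force sketch of a one-dimensional, globally normalized version (the profiles and grid below are made up for illustration, not measured data):

      import numpy as np

      def gamma_pass_rate(ref, eval_, x, dose_tol=0.03, dist_tol=3.0):
          """Brute-force 1D gamma analysis (global dose normalization).
          ref, eval_: doses on positions x (mm); returns fraction with gamma <= 1."""
          dmax = ref.max()
          passed = 0
          for i in range(len(x)):
              # Gamma at point i: best combined dose/distance agreement anywhere.
              dose_term = ((eval_ - ref[i]) / (dose_tol * dmax)) ** 2
              dist_term = ((x - x[i]) / dist_tol) ** 2
              if np.min(np.sqrt(dose_term + dist_term)) <= 1.0:
                  passed += 1
          return passed / len(x)

      x = np.linspace(0, 100, 201)                 # 0.5 mm grid
      ref = np.exp(-((x - 50) / 12) ** 2)          # reference profile
      shifted = np.exp(-((x - 51) / 12) ** 2)      # 1 mm delivery error
      print(f"pass rate: {gamma_pass_rate(ref, shifted, x):.3f}")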

  14. A convolution-superposition dose calculation engine for GPUs

    SciTech Connect

    Hissoiny, Sami; Ozell, Benoit; Despres, Philippe

    2010-03-15

    Purpose: Graphic processing units (GPUs) are increasingly used for scientific applications, where their parallel architecture and unprecedented computing power density can be exploited to accelerate calculations. In this paper, a new GPU implementation of a convolution/superposition (CS) algorithm is presented. Methods: This new GPU implementation has been designed from the ground up to use the graphics card's strengths and to avoid its weaknesses. The CS GPU algorithm takes into account beam hardening, off-axis softening, kernel tilting, and relies heavily on raytracing through patient imaging data. Implementation details are reported as well as a multi-GPU solution. Results: An overall single-GPU acceleration factor of 908x was achieved when compared to a nonoptimized version of the CS algorithm implemented in PlanUNC in single threaded central processing unit (CPU) mode, resulting in approximately 2.8 s per beam for a 3D dose computation on a 0.4 cm grid. A comparison to an established commercial system leads to an acceleration factor of approximately 29x or 0.58 versus 16.6 s per beam in single threaded mode. An acceleration factor of 46x has been obtained for the total energy released per mass (TERMA) calculation and a 943x acceleration factor for the CS calculation compared to PlanUNC. Dose distributions also have been obtained for a simple water-lung phantom to verify that the implementation gives accurate results. Conclusions: These results suggest that GPUs are an attractive solution for radiation therapy applications and that careful design, taking the GPU architecture into account, is critical in obtaining significant acceleration factors. These results potentially can have a significant impact on complex dose delivery techniques requiring intensive dose calculations such as intensity-modulated radiation therapy (IMRT) and arc therapy. They also are relevant for adaptive radiation therapy where dose results must be obtained rapidly.
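
    The essence of convolution/superposition is that dose is the TERMA distribution convolved with an energy-deposition kernel. A one-dimensional toy sketch of that idea (mono-energetic beam, made-up attenuation coefficient and kernel; nothing like the GPU implementation above):

      import numpy as np

      # Depth grid (cm) and a crude mono-energetic TERMA: the primary fluence
      # attenuates exponentially and releases energy at each depth.
      z = np.linspace(0, 30, 301)
      mu = 0.05                                  # effective attenuation (1/cm)
      terma = mu * np.exp(-mu * z)

      # Toy point-spread kernel: energy released at a point is deposited
      # downstream with a short exponential tail (electron transport).
      dz = z[1] - z[0]
      k = np.exp(-np.arange(0, 3, dz) / 0.5)
      kernel = k / k.sum()                       # normalize deposited energy

      # Superposition: dose is TERMA convolved with the deposition kernel.
      dose = np.convolve(terma, kernel)[:len(z)]
      print("depth of dose maximum: %.1f cm" % z[np.argmax(dose)])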

  15. Analytical modelling of regional radiotherapy dose response of lung

    NASA Astrophysics Data System (ADS)

    Lee, Sangkyu; Stroian, Gabriela; Kopek, Neil; AlBahhar, Mahmood; Seuntjens, Jan; El Naqa, Issam

    2012-06-01

    Knowledge of the dose-response of radiation-induced lung disease (RILD) is necessary for optimization of radiotherapy (RT) treatment plans involving thoracic cavity irradiation. This study models the time-dependent relationship between local radiation dose and post-treatment lung tissue damage measured by computed tomography (CT) imaging. Fifty-eight follow-up diagnostic CT scans from 21 non-small-cell lung cancer patients were examined. The extent of RILD was segmented on the follow-up CT images based on the increase of physical density relative to the pre-treatment CT image. The segmented RILD was locally correlated with dose distribution calculated by analytical anisotropic algorithm and the Monte Carlo method to generate the corresponding dose-response curves. The Lyman-Kutcher-Burman (LKB) model was fit to the dose-response curves at six post-RT time periods, and temporal change in the LKB parameters was recorded. In this study, we observed significant correlation between the probability of lung tissue damage and the local dose for 96% of the follow-up studies. Dose-injury correlation at the first three months after RT was significantly different from later follow-up periods in terms of steepness and threshold dose as estimated from the LKB model. Dependence of dose response on superior-inferior tumour position was also observed. The time-dependent analytical modelling of RILD might provide better understanding of the long-term behaviour of the disease and could potentially be applied to improve inverse treatment planning optimization.
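
    The Lyman-Kutcher-Burman model fitted above maps a dose distribution to a complication probability through the generalized EUD and a probit response. A minimal sketch with hypothetical parameter values (not the fitted values from this study):

      import numpy as np
      from scipy.stats import norm

      def lkb_ntcp(doses, td50, m, n):
          """Lyman-Kutcher-Burman NTCP for voxel doses of equal volume.
          n: volume-effect parameter; m: slope; td50: 50% complication dose."""
          geud = np.mean(doses ** (1.0 / n)) ** n        # generalized EUD
          t = (geud - td50) / (m * td50)
          return norm.cdf(t)                             # probit dose response

      # Hypothetical lung dose distribution (Gy) and illustrative parameters.
      doses = np.clip(np.random.default_rng(2).normal(18, 8, 5000), 0, None)
      print(f"NTCP = {lkb_ntcp(doses, td50=30.0, m=0.35, n=1.0):.2f}")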

  16. SU-F-19A-10: Recalculation and Reporting Clinical HDR 192-Ir Head and Neck Dose Distributions Using Model Based Dose Calculation

    SciTech Connect

    Carlsson Tedgren, A; Persson, M; Nilsson, J

    2014-06-15

    Purpose: To retrospectively re-calculate dose distributions for selected head and neck cancer patients, earlier treated with HDR 192Ir brachytherapy, using Monte Carlo (MC) simulations and compare results to distributions from the planning system derived using the TG43 formalism. To study differences between dose to medium (as obtained with the MC code) and dose to water in medium as obtained through (1) ratios of stopping powers and (2) ratios of mass energy absorption coefficients between water and medium. Methods: The MC code Algebra was used to calculate dose distributions according to earlier actual treatment plans using anonymized plan data and CT images in DICOM format. Ratios of stopping power and mass energy absorption coefficients for water with various media obtained from 192-Ir spectra were used in toggling between dose to water and dose to media. Results: Differences between initial planned TG43 dose distributions and the doses to media calculated by MC are insignificant in the target volume. Differences are moderate (within 4–5% at distances of 3–4 cm) but increase with distance and are most notable in bone and at the patient surface. Differences between dose to water and dose to medium are within 1-2% when using mass energy absorption coefficients to toggle between the two quantities but increase to above 10% for bone using stopping power ratios. Conclusion: MC predicts target doses for head and neck cancer patients in close agreement with TG43. MC yields improved dose estimations outside the target, where a larger fraction of the dose is from scattered photons. Awareness and clear reporting of absorbed dose values are important when using model-based algorithms. Differences in bone media can exceed 10% depending on how dose to water in medium is defined.
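
    Toggling between dose to medium and dose to water in medium reduces to one multiplicative conversion factor per medium. A sketch of that bookkeeping (the ratio values are illustrative placeholders, not tabulated 192-Ir data):

      # Toggling between dose to medium and dose to water in medium amounts to
      # one multiplicative factor per medium. The values below are illustrative
      # placeholders, not tabulated 192-Ir data.
      WATER_TO_MEDIUM_RATIO = {
          # medium: (mass energy absorption coeff. ratio, stopping power ratio)
          "soft tissue": (1.01, 1.00),
          "bone":        (1.05, 1.12),   # the two cavity-theory pictures diverge
      }

      def dose_to_water(d_medium, medium, use_stopping_power=False):
          """Convert D_m to D_w,m with the chosen cavity-theory ratio."""
          muen_ratio, spr = WATER_TO_MEDIUM_RATIO[medium]
          return d_medium * (spr if use_stopping_power else muen_ratio)

      d_m = 2.0  # Gy in bone
      print(dose_to_water(d_m, "bone"), dose_to_water(d_m, "bone", True))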

  17. Genetic Algorithms and Local Search

    NASA Technical Reports Server (NTRS)

    Whitley, Darrell

    1996-01-01

    The first part of this presentation is a tutorial level introduction to the principles of genetic search and models of simple genetic algorithms. The second half covers the combination of genetic algorithms with local search methods to produce hybrid genetic algorithms. Hybrid algorithms can be modeled within the existing theoretical framework developed for simple genetic algorithms. An application of a hybrid to geometric model matching is given. The hybrid algorithm yields results that improve on the current state-of-the-art for this problem.
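
    A minimal sketch of the hybrid idea: a plain genetic algorithm whose offspring are each refined by a short local hill-climbing search (toy one-dimensional objective; the operators and parameters are illustrative assumptions, not those of the presentation):

      import random

      def fitness(x):                 # toy objective: maximize -(x - 3)^2
          return -(x - 3.0) ** 2

      def local_search(x, step=0.05, iters=10):
          """Hill climbing applied to each offspring: the 'hybrid' ingredient."""
          for _ in range(iters):
              cand = x + random.uniform(-step, step)
              if fitness(cand) > fitness(x):
                  x = cand
          return x

      pop = [random.uniform(-10, 10) for _ in range(30)]
      for gen in range(40):
          pop.sort(key=fitness, reverse=True)
          parents = pop[:10]                          # truncation selection
          children = []
          while len(children) < len(pop):
              a, b = random.sample(parents, 2)
              child = (a + b) / 2                     # blend crossover
              child += random.gauss(0, 0.5)           # mutation
              children.append(local_search(child))    # Lamarckian refinement
          pop = children
      print(max(pop, key=fitness))                    # converges near x = 3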

  18. Dose to medium versus dose to water as an estimator of dose to sensitive skeletal tissue

    NASA Astrophysics Data System (ADS)

    Walters, B. R. B.; Kramer, R.; Kawrakow, I.

    2010-08-01

    The purpose of this study is to determine whether dose to medium, Dm, or dose to water, Dw, provides a better estimate of the dose to the radiosensitive red bone marrow (RBM) and bone surface cells (BSC) in spongiosa, or cancellous bone. This is addressed in the larger context of the ongoing debate over whether Dm or Dw should be specified in Monte Carlo calculated radiotherapy treatment plans. The study uses voxelized, virtual human phantoms, FAX06/MAX06 (female/male), incorporated into an EGSnrc Monte Carlo code to perform Monte Carlo dose calculations during simulated irradiation by a 6 MV photon beam from an Elekta SL25 accelerator. Head and neck, chest and pelvis irradiations are studied. FAX06/MAX06 include precise modelling of spongiosa based on µCT images, allowing dose to RBM and BSC to be resolved from the dose to bone. Modifications to the FAX06/MAX06 user codes are required to score Dw and Dm in spongiosa. Dose uncertainties of ~1% (BSC, RBM) or ~0.5% (Dm, Dw) are obtained after up to 5 days of simulations on 88 CPUs. Clinically significant differences (>5%) between Dm and Dw are found only in cranial spongiosa, where the volume fraction of trabecular bone (TBVF) is high (55%). However, for spongiosa locations where there is any significant difference between Dm and Dw, comparisons of differential dose volume histograms (DVHs) and average doses show that Dw provides a better overall estimate of dose to RBM and BSC. For example, in cranial spongiosa the average Dm underestimates the average dose to sensitive tissue by at least 5%, while average Dw is within ~1% of the average dose to sensitive tissue. Thus, it is better to specify Dw than Dm in Monte Carlo treatment plans, since Dw provides a better estimate of dose to sensitive tissue in bone, the only location where the difference is likely to be clinically significant.

  19. Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    Finch, S.M.; McMakin, A.H.

    1992-06-01

    The objective of the Hanford Environmental Dose Reconstruction Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The project is being managed and conducted by the Battelle Pacific Northwest Laboratories under contract with the Centers for Disease Control. The independent Technical Steering Panel (TSP) provides technical direction. The project is divided into the following technical tasks. These tasks correspond to the path radionuclides followed, from release to impact on humans (dose estimates): source terms; environmental transport; environmental monitoring data; demography, food consumption, and agriculture; environmental pathways and dose estimates.

  1. Psychotropic dose equivalence in Japan.

    PubMed

    Inada, Toshiya; Inagaki, Ataru

    2015-08-01

    Psychotropic dose equivalence is an important concept when estimating the approximate psychotropic doses patients receive, and deciding on the approximate titration dose when switching from one psychotropic agent to another. It is also useful from a research viewpoint when defining and extracting specific subgroups of subjects. Unification of various agents into a single standard agent facilitates easier analytical comparisons. On the basis of differences in psychopharmacological prescription features, those of available psychotropic agents and their approved doses, and racial differences between Japan and other countries, psychotropic dose equivalency tables designed specifically for Japanese patients have been widely used in Japan since 1998. Here we introduce dose equivalency tables for: (i) antipsychotics; (ii) antiparkinsonian agents; (iii) antidepressants; and (iv) anxiolytics, sedatives and hypnotics available in Japan. Equivalent doses for the therapeutic effects of individual psychotropic compounds were determined principally on the basis of randomized controlled trials conducted in Japan and consensus among dose equivalency tables reported previously by psychopharmacological experts. As these tables are intended to merely suggest approximate standard values, physicians should use them with discretion. Updated information of psychotropic dose equivalence in Japan is available at http://www.jsprs.org/en/equivalence.tables/.
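
    Computationally, such a table reduces each agent to a conversion factor against a standard compound. A sketch using the common chlorpromazine-equivalent convention (the factor values here are illustrative placeholders, not the published Japanese tables):

      # Chlorpromazine-equivalent conversion, the usual convention for
      # antipsychotics. Factor = dose (mg) equivalent to 100 mg chlorpromazine.
      # Values are illustrative placeholders, not the published table.
      EQUIV_100MG_CPZ = {"chlorpromazine": 100.0, "haloperidol": 2.0, "risperidone": 1.0}

      def cpz_equivalent(agent, dose_mg):
          """Express a daily dose as mg/day of chlorpromazine."""
          return dose_mg * 100.0 / EQUIV_100MG_CPZ[agent]

      # Total antipsychotic load of a hypothetical combination regimen:
      regimen = [("haloperidol", 4.0), ("risperidone", 2.0)]
      print(sum(cpz_equivalent(a, d) for a, d in regimen), "mg CPZ-eq/day")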

  2. Reactive Collision Avoidance Algorithm

    NASA Technical Reports Server (NTRS)

    Scharf, Daniel; Acikmese, Behcet; Ploen, Scott; Hadaegh, Fred

    2010-01-01

    The reactive collision avoidance (RCA) algorithm allows a spacecraft to find a fuel-optimal trajectory for avoiding an arbitrary number of colliding spacecraft in real time while accounting for acceleration limits. In addition to spacecraft, the technology can be used for vehicles that can accelerate in any direction, such as helicopters and submersibles. In contrast to existing, passive algorithms that simultaneously design trajectories for a cluster of vehicles working to achieve a common goal, RCA is implemented onboard a spacecraft only when an imminent collision is detected, and then plans a collision avoidance maneuver for only that host vehicle, thus preventing a collision in an off-nominal situation that passive algorithms cannot handle. An example scenario for such a situation might be when a spacecraft in the cluster is approaching another one, but enters safe mode and begins to drift. Functionally, the RCA detects colliding spacecraft, plans an evasion trajectory by solving the Evasion Trajectory Problem (ETP), and then recovers after the collision is avoided. A direct optimization approach was used to develop the algorithm so it can run in real time. In this innovation, a parameterized class of avoidance trajectories is specified, and then the optimal trajectory is found by searching over the parameters. The class of trajectories is selected as bang-off-bang as motivated by optimal control theory. That is, an avoiding spacecraft first applies full acceleration in a constant direction, then coasts, and finally applies full acceleration to stop. The parameter optimization problem can be solved offline and stored as a look-up table of values. Using a look-up table allows the algorithm to run in real time. Given a colliding spacecraft, the properties of the collision geometry serve as indices of the look-up table that gives the optimal trajectory. For multiple colliding spacecraft, the set of trajectories that avoid all spacecraft is rapidly searched on
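
    The parameterized trajectory class is easy to sketch in one dimension: accelerate at the limit, coast, then brake, with the coast length as the searched parameter. A rest-to-rest toy version under those assumptions (not the flight implementation):

      import numpy as np

      def bang_off_bang(distance, a_max, coast_ratio=1.0):
          """Rest-to-rest 1D maneuver: accelerate at +a_max for t_b, coast for
          t_c = coast_ratio * t_b, brake at -a_max for t_b. coast_ratio is the
          free parameter a trajectory search would optimize over."""
          # distance covered = a t_b^2 (two burn legs) + a t_b * t_c (coast leg)
          t_b = np.sqrt(distance / (a_max * (1.0 + coast_ratio)))
          return t_b, coast_ratio * t_b

      t_b, t_c = bang_off_bang(distance=100.0, a_max=0.5)
      print(f"burn {t_b:.1f}s per leg, coast {t_c:.1f}s, total {2*t_b + t_c:.1f}s")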

  3. Survey of clinical doses from computed tomography examinations in the Canadian province of Manitoba.

    PubMed

    A Elbakri, Idris; D C Kirkpatrick, Iain

    2013-12-01

    The purpose of this study was to document CT doses for common CT examinations performed throughout the province of Manitoba. Survey forms were sent out to all provincial CT sites. Thirteen out of sixteen (81 %) sites participated. The authors assessed scans of the brain, routine abdomen-pelvis, routine chest, sinuses, lumbar spine, low-dose lung nodule studies, CT pulmonary angiograms, CT KUBs, CT colonographies and combination chest-abdomen-pelvis exams. Sites recorded scanner model, protocol techniques and patient and dose data for 100 consecutive patients who were scanned with any of the aforementioned examinations. Mean effective doses and standard deviations for the province and for individual scanners were computed. The Kruskal-Wallis test was used to compare the variability of effective doses amongst scanners. The t test was used to compare doses and their provincial ranges between newer and older scanners and scanners that used dose saving tools and those that did not. Abdomen-pelvis, chest and brain scans accounted for over 70 % of scans. Their mean effective doses were 18.0 ± 6.7, 13.2 ± 6.4 and 3.0 ± 1.0 mSv, respectively. Variations in doses amongst scanners were statistically significant. Most examinations were performed at 120 kVp, and no lower kVp was used. Dose variations due to scanner age and use of dose saving tools were not statistically significant. Clinical CT doses in Manitoba are broadly similar to but higher than those reported in other Canadian provinces. Results suggest that further dose reduction can be achieved by modifying scanning techniques, such as using lower kVp. Wide variation in doses amongst different scanners suggests that standardisation of scanning protocols can reduce patient dose. New technological advances, such as dose-reduction software algorithms, can be adopted to reduce patient dose.

  4. From AAA to Acuros XB-clinical implications of selecting either Acuros XB dose-to-water or dose-to-medium.

    PubMed

    Zifodya, Jackson M; Challens, Cameron H C; Hsieh, Wen-Long

    2016-06-01

    When implementing Acuros XB (AXB) as a substitute for the anisotropic analytic algorithm (AAA) in the Eclipse Treatment Planning System, one is faced with a dilemma of reporting either dose to medium, AXB-Dm, or dose to water, AXB-Dw. To assist with decision making on selecting either AXB-Dm or AXB-Dw for dose reporting, a retrospective study of treated patients for head & neck (H&N), prostate, breast and lung is presented. Ten patients, previously treated using AAA plans, were selected for each site and re-planned with AXB-Dm and AXB-Dw. Re-planning was done with fixed monitor units (MU) as well as non-fixed MUs. Dose volume histograms (DVH) of targets and organs at risk (OAR) were analyzed in conjunction with ICRU-83 recommended dose reporting metrics. Additionally, comparisons of plan homogeneity indices (HI) and MUs were done to further highlight the differences between the algorithms. Results showed that, on average, AAA overestimated dose to the target volume and OARs by less than 2.0%. Comparisons between AXB-Dw and AXB-Dm, for all sites, also showed overall dose differences to be small (<1.5%). However, in non-water biological media, dose differences between AXB-Dw and AXB-Dm as large as 4.6% were observed. AXB-Dw also tended to have unexpectedly high 3D maximum dose values (>135% of prescription dose) for target volumes with high-density materials. Homogeneity indices showed that AAA planning and optimization templates would need to be adjusted only for the H&N and lung sites. MU comparison showed insignificant differences between AXB-Dw relative to AAA and between AXB-Dw relative to AXB-Dm. However, AXB-Dm MUs relative to AAA showed an average difference of about 1.3%, signifying an underdosage by AAA. In conclusion, when dose is reported as AXB-Dw, the effect that high-density structures in the PTV have on the dose distribution should be carefully considered. As the results show overall small dose differences between the algorithms, when

  5. Analytical probabilistic proton dose calculation and range uncertainties

    NASA Astrophysics Data System (ADS)

    Bangert, M.; Hennig, P.; Oelfke, U.

    2014-03-01

    We introduce the concept of analytical probabilistic modeling (APM) to calculate the mean and the standard deviation of intensity-modulated proton dose distributions under the influence of range uncertainties in closed form. For APM, range uncertainties are modeled with a multivariate Normal distribution p(z) over the radiological depths z. A pencil beam algorithm that parameterizes the proton depth dose d(z) with a weighted superposition of ten Gaussians is used. Hence, the integrals ∫ dz p(z) d(z) and ∫ dz p(z) d(z)² required for the calculation of the expected value and standard deviation of the dose remain analytically tractable and can be efficiently evaluated. The means μk, widths δk, and weights ωk of the Gaussian components parameterizing the depth dose curves are found with least squares fits for all available proton ranges. We observe less than 0.3% average deviation of the Gaussian parameterizations from the original proton depth dose curves. Consequently, APM yields high accuracy estimates for the expected value and standard deviation of intensity-modulated proton dose distributions for two dimensional test cases. APM can accommodate arbitrary correlation models and account for the different nature of random and systematic errors in fractionated radiation therapy. Beneficial applications of APM in robust planning are feasible.
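
    The analytical tractability rests on a standard identity: a Gaussian depth-dose component integrated against a Normal range uncertainty is again a Gaussian with added variances. A sketch verifying the closed form for one component numerically (all numbers are hypothetical):

      import numpy as np
      from scipy.stats import norm

      # One Gaussian component of the depth-dose parameterization ...
      mu_k, delta_k, w_k = 15.0, 0.4, 1.0        # cm, cm, weight
      # ... and a Normal range uncertainty about the nominal radiological depth.
      mu_z, sigma = 14.8, 0.3

      # Closed form: the product of two Gaussians integrates to a Gaussian
      # evaluated at the mean difference, with the variances added.
      expected = w_k * norm.pdf(mu_z, loc=mu_k, scale=np.hypot(sigma, delta_k))

      # Numerical check of  E[d(z)] = ∫ dz p(z) d(z)
      z = np.linspace(10, 20, 20001)
      integrand = norm.pdf(z, mu_z, sigma) * w_k * norm.pdf(z, mu_k, delta_k)
      numeric = np.sum(integrand) * (z[1] - z[0])
      assert np.isclose(expected, numeric, rtol=1e-4)
      print(expected)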

  6. An efficient algorithm for function optimization: modified stem cells algorithm

    NASA Astrophysics Data System (ADS)

    Taherdangkoo, Mohammad; Paziresh, Mahsa; Yazdi, Mehran; Bagheri, Mohammad

    2013-03-01

    In this paper, we propose an optimization algorithm based on the intelligent behavior of stem cell swarms in reproduction and self-organization. Optimization algorithms, such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO) algorithm, Ant Colony Optimization (ACO) algorithm and Artificial Bee Colony (ABC) algorithm, can give solutions to linear and non-linear problems near to the optimum for many applications; however, in some cases, they can suffer from becoming trapped in local optima. The Stem Cells Algorithm (SCA) is an optimization algorithm inspired by the natural behavior of stem cells in evolving themselves into new and improved cells. The SCA avoids the local optima problem successfully. In this paper, we have made small changes in the implementation of this algorithm to obtain improved performance over previous versions. Using a series of benchmark functions, we assess the performance of the proposed algorithm and compare it with that of the other aforementioned optimization algorithms. The results obtained demonstrate the superiority of the Modified Stem Cells Algorithm (MSCA).

  7. Algorithm Visualization System for Teaching Spatial Data Algorithms

    ERIC Educational Resources Information Center

    Nikander, Jussi; Helminen, Juha; Korhonen, Ari

    2010-01-01

    TRAKLA2 is a web-based learning environment for data structures and algorithms. The system delivers automatically assessed algorithm simulation exercises that are solved using a graphical user interface. In this work, we introduce a novel learning environment for spatial data algorithms, SDA-TRAKLA2, which has been implemented on top of the…

  8. Practical implementation of a collapsed cone convolution algorithm for a radiation treatment planning system

    NASA Astrophysics Data System (ADS)

    Cho, Woong; Suh, Tae-Suk; Park, Jeong-Hoon; Xing, Lei; Lee, Jeong-Woo

    2012-12-01

    A collapsed cone convolution algorithm was applied to a treatment planning system for the calculation of dose distributions. The distribution of beam fluences was determined using a three-source model by considering the source strengths of the primary beam, the beam scattered from the primary collimators, and an extra beam scattered from extra structures in the gantry head of the radiotherapy treatment machine. The distribution of the total energy released per unit mass (TERMA) was calculated from the distribution of the fluence by considering several physical effects such as the emission of poly-energetic photon spectra, the attenuation of the beam fluence in a medium, the horn effect, the beam-softening effect, and beam transmission through collimators or multi-leaf collimators. The distribution of the doses was calculated by using the convolution of the distribution of the TERMA and the poly-energetic kernel. The distribution of the kernel was approximated to several tens of collapsed cone lines to express the energies transferred by the electrons that originated from the interactions between the photons and the medium. The implemented algorithm was validated by comparing the calculated percentage depth doses (PDDs) and dose profiles with the measured PDDs and relevant profiles. In addition, the dose distribution for an irregular-shaped radiation field was verified by comparing the calculated doses with the measured doses obtained via EDR2 film dosimetry and with the calculated doses obtained using a different treatment planning system based on the pencil beam algorithm (Eclipse, Varian, Palo Alto, USA). The majority of the calculated doses for the PDDs, the profiles, and the irregular-shaped field showed good agreement with the measured doses to within a 2% dose difference, except in the build-up regions. The implemented algorithm was proven to be efficient and accurate for clinical purposes in radiation therapy, and it was found to be easily implementable in

  9. Efficient iterative image reconstruction algorithm for dedicated breast CT

    NASA Astrophysics Data System (ADS)

    Antropova, Natalia; Sanchez, Adrian; Reiser, Ingrid S.; Sidky, Emil Y.; Boone, John; Pan, Xiaochuan

    2016-03-01

    Dedicated breast computed tomography (bCT) is currently being studied as a potential screening method for breast cancer. The X-ray exposure is set low to achieve an average glandular dose comparable to that of mammography, yielding projection data that contains high levels of noise. Iterative image reconstruction (IIR) algorithms may be well-suited for the system since they potentially reduce the effects of noise in the reconstructed images. However, IIR outcomes can be difficult to control since the algorithm parameters do not directly correspond to the image properties. Also, IIR algorithms are computationally demanding and have optimal parameter settings that depend on the size and shape of the breast and positioning of the patient. In this work, we design an efficient IIR algorithm with meaningful parameter specifications and that can be used on a large, diverse sample of bCT cases. The flexibility and efficiency of this method comes from having the final image produced by a linear combination of two separately reconstructed images - one containing gray level information and the other with enhanced high frequency components. Both of the images result from few iterations of separate IIR algorithms. The proposed algorithm depends on two parameters both of which have a well-defined impact on image quality. The algorithm is applied to numerous bCT cases from a dedicated bCT prototype system developed at University of California, Davis.

  10. Spatial dose distribution in polymer pipes exposed to electron beam

    NASA Astrophysics Data System (ADS)

    Ponomarev, Alexander V.

    2016-01-01

    Non-uniform distribution of absorbed dose in the cross-section of a polymeric pipe is caused by the non-uniform thickness of the polymer layer penetrated by a unidirectional electron beam. A special computer program was created for prompt estimation of dose non-uniformity in pipes subjected to irradiation by a 1-10 MeV electron beam. Irrespective of electron beam energy, the local doses absorbed in the bulk of the material can be calculated on the basis of the universal correlations offered in this work. Incomplete deceleration of electrons in shallow layers of the polymer was taken into account. The algorithm provides for wide variation of pipe sizes, polymer properties and irradiation modes. Both unilateral and multilateral irradiation can be simulated.

  11. TH-A-19A-06: Site-Specific Comparison of Analytical and Monte Carlo Based Dose Calculations

    SciTech Connect

    Schuemann, J; Grassberger, C; Paganetti, H; Dowdell, S

    2014-06-15

    Purpose: To investigate the impact of complex patient geometries on the capability of analytical dose calculation algorithms to accurately predict dose distributions and to verify currently used uncertainty margins in proton therapy. Methods: Dose distributions predicted by an analytical pencil-beam algorithm were compared with Monte Carlo simulations (MCS) using TOPAS. 79 complete patient treatment plans were investigated for 7 disease sites (liver, prostate, breast, medulloblastoma spine and whole brain, lung and head and neck). A total of 508 individual passively scattered treatment fields were analyzed for field specific properties. Comparisons based on target coverage indices (EUD, D95, D90 and D50) were performed. Range differences were estimated for the distal position of the 90% dose level (R90) and the 50% dose level (R50). Two-dimensional distal dose surfaces were calculated and the root mean square differences (RMSD), average range difference (ARD) and average distal dose degradation (ADD), the distance between the distal position of the 80% and 20% dose levels (R80-R20), were analyzed. Results: We found target coverage indices calculated by TOPAS to generally be around 1–2% lower than predicted by the analytical algorithm. Differences in R90 predicted by TOPAS and the planning system can be larger than currently applied range margins in proton therapy for small regions distal to the target volume. We estimate new site-specific range margins (R90) for analytical dose calculations considering total range uncertainties and uncertainties from dose calculation alone based on the RMSD. Our results demonstrate that a reduction of currently used uncertainty margins is feasible for liver, prostate and whole brain fields even without introducing MC dose calculations. Conclusion: Analytical dose calculation algorithms predict dose distributions within clinical limits for more homogeneous patient sites (liver, prostate, whole brain). However, we recommend
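
    Range metrics such as R90 and R50 are read off the distal falloff of a depth-dose curve. A sketch extracting them by linear interpolation from a toy Bragg-like curve (the curve shape and grid are made up for illustration):

      import numpy as np

      def distal_range(z, dose, level=0.9):
          """Depth (same units as z) where the dose last falls through
          `level` * max(dose) on the distal side, by linear interpolation."""
          d = dose / dose.max()
          i = np.where(d >= level)[0][-1]        # last point still above level
          z1, z2, d1, d2 = z[i], z[i + 1], d[i], d[i + 1]
          return z1 + (level - d1) * (z2 - z1) / (d2 - d1)

      # Toy Bragg-like curve: slow rise, then a sharp distal falloff near 15 cm.
      z = np.linspace(0, 20, 401)
      dose = (1 + 0.03 * z) / (1 + np.exp((z - 15.0) / 0.25))
      r90, r50 = distal_range(z, dose, 0.9), distal_range(z, dose, 0.5)
      print(f"R90 = {r90:.2f} cm, R50 = {r50:.2f} cm, falloff width = {r50 - r90:.2f} cm")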

  12. Comparing barrier algorithms

    NASA Technical Reports Server (NTRS)

    Arenstorf, Norbert S.; Jordan, Harry F.

    1987-01-01

    A barrier is a method for synchronizing a large number of concurrent computer processes. After considering some basic synchronization mechanisms, a collection of barrier algorithms with either linear or logarithmic depth are presented. A graphical model is described that profiles the execution of the barriers and other parallel programming constructs. This model shows how the interaction between the barrier algorithms and the work that they synchronize can impact their performance. One result is that logarithmic tree structured barriers show good performance when synchronizing fixed length work, while linear self-scheduled barriers show better performance when synchronizing fixed length work with an embedded critical section. The linear barriers are better able to exploit the process skew associated with critical sections. Timing experiments, performed on an eighteen-processor Flex/32 shared memory multiprocessor, that support these conclusions are detailed.
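
    A linear-depth barrier is essentially a shared counter that the last arrival resets before releasing everyone. A minimal reusable sketch using Python threads (generation counting guards against spurious wakeups; this stands in for, rather than reproduces, the paper's Flex/32 implementations):

      import threading

      class LinearBarrier:
          """Centralized counter barrier: every process updates one shared
          counter, so the synchronization depth grows linearly with N."""
          def __init__(self, n):
              self.n, self.count, self.generation = n, 0, 0
              self.cond = threading.Condition()

          def wait(self):
              with self.cond:
                  gen = self.generation
                  self.count += 1
                  if self.count == self.n:           # last arrival releases all
                      self.count = 0
                      self.generation += 1
                      self.cond.notify_all()
                  else:
                      while gen == self.generation:  # guard against spurious wakeups
                          self.cond.wait()

      barrier = LinearBarrier(4)
      def worker(i):
          print(f"worker {i} before")
          barrier.wait()
          print(f"worker {i} after")

      threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
      for t in threads: t.start()
      for t in threads: t.join()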

  13. Algorithms, games, and evolution.

    PubMed

    Chastain, Erick; Livnat, Adi; Papadimitriou, Christos; Vazirani, Umesh

    2014-07-22

    Even the most seasoned students of evolution, starting with Darwin himself, have occasionally expressed amazement that the mechanism of natural selection has produced the whole of Life as we see it around us. There is a computational way to articulate the same amazement: "What algorithm could possibly achieve all this in a mere three and a half billion years?" In this paper we propose an answer: We demonstrate that in the regime of weak selection, the standard equations of population genetics describing natural selection in the presence of sex become identical to those of a repeated game between genes played according to multiplicative weight updates (MWUA), an algorithm known in computer science to be surprisingly powerful and versatile. MWUA maximizes a tradeoff between cumulative performance and entropy, which suggests a new view on the maintenance of diversity in evolution.
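
    The multiplicative weight updates algorithm referenced above is short enough to state directly: play the normalized weights as a mixed strategy and scale each weight by (1 + eta * payoff) every round. A generic sketch with made-up payoffs:

      import numpy as np

      def mwua(payoff_rows, eta=0.1, rounds=200):
          """Multiplicative weights over n actions: play the mixed strategy
          w / sum(w) each round, then scale each weight by (1 + eta * payoff)."""
          n = payoff_rows.shape[1]
          w = np.ones(n)
          for t in range(rounds):
              payoff = payoff_rows[t % len(payoff_rows)]
              w *= 1.0 + eta * payoff   # the multiplicative update
              w /= w.sum()              # keep it a probability vector
          return w

      # Two actions with made-up payoffs; action 1 pays more on average.
      payoffs = np.array([[0.0, 0.1], [0.05, 0.1], [0.1, 0.15]])
      print(mwua(payoffs))              # weight concentrates on action 1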

  14. CAVITY CONTROL ALGORITHM

    SciTech Connect

    Tomasz Plawski, J. Hovater

    2010-09-01

    A digital low level radio frequency (RF) system typically incorporates either a heterodyne or direct sampling technique, followed by fast ADCs, then an FPGA, and finally a transmitting DAC. This universal platform opens up the possibilities for a variety of control algorithm implementations. The foremost concern for an RF control system is cavity field stability, and to meet the required quality of regulation, the chosen control system needs to have sufficient feedback gain. In this paper we will investigate the effectiveness of the regulation for three basic control system algorithms: I&Q (In-phase and Quadrature), Amplitude & Phase and digital SEL (Self Exciting Loop) along with the example of the Jefferson Lab 12 GeV cavity field control system.

  15. Prediction of Warfarin Dose in Pediatric Patients: An Evaluation of the Predictive Performance of Several Models

    PubMed Central

    Marek, Elizabeth; Momper, Jeremiah D.; Hines, Ronald N.; Takao, Cheryl M.; Gill, Joan C.; Pravica, Vera; Gaedigk, Andrea; Neville, Kathleen A.

    2016-01-01

    OBJECTIVES: The objective of this study was to evaluate the performance of pediatric pharmacogenetic-based dose prediction models by using an independent cohort of pediatric patients from a multicenter trial. METHODS: Clinical and genetic data (CYP2C9 [cytochrome P450 2C9] and VKORC1 [vitamin K epoxide reductase]) were collected from pediatric patients aged 3 months to 17 years who were receiving warfarin as part of standard care at 3 separate clinical sites. The accuracy of 8 previously published pediatric pharmacogenetic-based dose models was evaluated in the validation cohort by comparing predicted maintenance doses to actual stable warfarin doses. The predictive ability was assessed by using the proportion of variance (R2), mean prediction error (MPE), and the percentage of predictions that fell within 20% of the actual maintenance dose. RESULTS: Thirty-two children reached a stable international normalized ratio and were included in the validation cohort. The pharmacogenetic-based warfarin dose models showed a proportion of variance ranging from 35% to 78% and an MPE ranging from −2.67 to 0.85 mg/day in the validation cohort. Overall, the model developed by Hamberg et al showed the best performance in the validation cohort (R2 = 78%; MPE = 0.15 mg/day) with 38% of the predictions falling within 20% of observed doses. CONCLUSIONS: Pharmacogenetic-based algorithms provide better predictions than a fixed-dose approach, although an optimal dose algorithm has not yet been developed. PMID:27453700
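
    The three validation metrics used above are simple to compute from paired predicted and observed doses. A sketch with hypothetical values (R² is taken here as 1 - SSres/SStot; the study may define the proportion of variance differently):

      import numpy as np

      def validation_metrics(predicted, observed):
          """R^2 (proportion of variance), mean prediction error (pred - obs),
          and percentage of predictions within 20% of the observed dose."""
          predicted, observed = np.asarray(predicted), np.asarray(observed)
          ss_res = np.sum((observed - predicted) ** 2)
          ss_tot = np.sum((observed - observed.mean()) ** 2)
          r2 = 1.0 - ss_res / ss_tot
          mpe = np.mean(predicted - observed)
          within20 = np.mean(np.abs(predicted - observed) <= 0.2 * observed) * 100
          return r2, mpe, within20

      # Hypothetical stable doses vs model predictions (mg/day).
      obs = np.array([1.5, 2.0, 3.5, 4.0, 5.5, 2.5])
      pred = np.array([1.8, 2.1, 3.0, 4.6, 5.0, 2.2])
      r2, mpe, pct = validation_metrics(pred, obs)
      print(f"R2 = {r2:.2f}, MPE = {mpe:+.2f} mg/day, within 20%: {pct:.0f}%")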

  16. Helical tomotherapy superficial dose measurements

    SciTech Connect

    Ramsey, Chester R.; Seibert, Rebecca M.; Robison, Benjamin; Mitchell, Martha

    2007-08-15

    Helical tomotherapy is a treatment technique that is delivered from a 6 MV fan beam that traces a helical path while the couch moves linearly into the bore. In order to increase the treatment delivery dose rate, helical tomotherapy systems do not have a flattening filter. As such, the dose distributions near the surface of the patient may be considerably different from other forms of intensity-modulated delivery. The purpose of this study was to measure the dose distributions near the surface for helical tomotherapy plans with a varying separation between the target volume and the surface of an anthropomorphic phantom. A hypothetical planning target volume (PTV) was defined on an anthropomorphic head phantom to simulate a 2.0 Gy per fraction IMRT parotid-sparing head and neck treatment of the upper neck nodes. A total of six target volumes were created with 0, 1, 2, 3, 4, and 5 mm of separation between the surface of the phantom and the outer edge of the PTV. Superficial doses were measured for each of the treatment deliveries using film placed in the head phantom and thermoluminescent dosimeters (TLDs) placed on the phantom's surface underneath an immobilization mask. In the 0 mm test case where the PTV extends to the phantom surface, the mean TLD dose was 1.73±0.10 Gy (or 86.6±5.1% of the prescribed dose). The measured superficial dose decreases to 1.23±0.10 Gy (61.5±5.1% of the prescribed dose) for a PTV-surface separation of 5 mm. The doses measured by the TLDs indicated that the tomotherapy treatment planning system overestimates superficial doses by 8.9±3.2%. The radiographic film dose for the 0 mm test case was 1.73±0.07 Gy, as compared to the calculated dose of 1.78±0.05 Gy. Given the results of the TLD and film measurements, the superficial calculated doses are overestimated between 3% and 13%. Without the use of bolus, tumor volumes that extend to the surface may be underdosed. As such, it is recommended that bolus be added for these

  17. Adaptive continuous twisting algorithm

    NASA Astrophysics Data System (ADS)

    Moreno, Jaime A.; Negrete, Daniel Y.; Torres-González, Victor; Fridman, Leonid

    2016-09-01

    In this paper, an adaptive continuous twisting algorithm (ACTA) is presented. For the double integrator, ACTA produces a continuous control signal ensuring finite time convergence of the states to zero. Moreover, the control signal generated by ACTA compensates for the Lipschitz perturbation in finite time, i.e. its value converges to the opposite value of the perturbation. ACTA also retains its convergence properties even when the upper bound of the derivative of the perturbation exists but is unknown.

  18. Quantum defragmentation algorithm

    SciTech Connect

    Burgarth, Daniel; Giovannetti, Vittorio

    2010-08-15

    In this addendum to our paper [D. Burgarth and V. Giovannetti, Phys. Rev. Lett. 99, 100501 (2007)] we prove that during the transformation that allows one to enforce control by relaxation on a quantum system, the ancillary memory can be kept at a finite size, independently from the fidelity one wants to achieve. The result is obtained by introducing the quantum analog of defragmentation algorithms which are employed for efficiently reorganizing classical information in conventional hard disks.

  19. Basic cluster compression algorithm

    NASA Technical Reports Server (NTRS)

    Hilbert, E. E.; Lee, J.

    1980-01-01

    Feature extraction and data compression of LANDSAT data is accomplished by BCCA program which reduces costs associated with transmitting, storing, distributing, and interpreting multispectral image data. Algorithm uses spatially local clustering to extract features from image data to describe spectral characteristics of data set. Approach requires only simple repetitive computations, and parallel processing can be used for very high data rates. Program is written in FORTRAN IV for batch execution and has been implemented on SEL 32/55.

  1. NOSS altimeter algorithm specifications

    NASA Technical Reports Server (NTRS)

    Hancock, D. W.; Forsythe, R. G.; Mcmillan, J. D.

    1982-01-01

    A description of all algorithms required for altimeter processing is given. Each description includes title, description, inputs/outputs, general algebraic sequences and data volume. All required input/output data files are described and the computer resources required for the entire altimeter processing system were estimated. The majority of the data processing requirements for any radar altimeter of the Seasat-1 type are scoped. Additions and deletions could be made for the specific altimeter products required by other projects.

  2. The Loop Algorithm

    NASA Astrophysics Data System (ADS)

    Evertz, Hans Gerd

    1998-03-01

    Exciting new investigations have recently become possible for strongly correlated systems of spins, bosons, and fermions, through Quantum Monte Carlo simulations with the Loop Algorithm (H.G. Evertz, G. Lana, and M. Marcu, Phys. Rev. Lett. 70, 875 (1993).) (For a recent review see: H.G. Evertz, cond-mat/9707221.) and its generalizations. A review of this new method, its generalizations and its applications is given, including some new results. The Loop Algorithm is based on a formulation of physical models in an extended ensemble of worldlines and graphs, and is related to Swendsen-Wang cluster algorithms. It performs nonlocal changes of worldline configurations, determined by local stochastic decisions. It overcomes many of the difficulties of traditional worldline simulations. Computer time requirements are reduced by orders of magnitude, through a corresponding reduction in autocorrelations. The grand-canonical ensemble (e.g. varying winding numbers) is naturally simulated. The continuous time limit can be taken directly. Improved Estimators exist which further reduce the errors of measured quantities. The algorithm applies unchanged in any dimension and for varying bond-strengths. It becomes less efficient in the presence of strong site disorder or strong magnetic fields. It applies directly to locally XYZ-like spin, fermion, and hard-core boson models. It has been extended to the Hubbard and the tJ model and generalized to higher spin representations. There have already been several large scale applications, especially for Heisenberg-like models, including a high statistics continuous time calculation of quantum critical exponents on a regularly depleted two-dimensional lattice of up to 20000 spatial sites at temperatures down to T=0.01 J.

  3. Genetic Algorithm for Optimization: Preprocessor and Algorithm

    NASA Technical Reports Server (NTRS)

    Sen, S. K.; Shaykhian, Gholam A.

    2006-01-01

    Genetic algorithm (GA), inspired by Darwin's theory of evolution and employed to solve optimization problems - unconstrained or constrained - uses an evolutionary process. A GA has several parameters such as the population size, search space, crossover and mutation probabilities, and fitness criterion. These parameters are not universally known/determined a priori for all problems. Depending on the problem at hand, these parameters need to be decided such that the resulting GA performs the best. We present here a preprocessor that achieves just that, i.e., it determines, for a specified problem, the foregoing parameters so that the consequent GA performs best for the problem. We stress also the need for such a preprocessor both for quality (error) and for cost (complexity) to produce the solution. The preprocessor includes, as its first step, making use of all the information such as that of the nature/character of the function/system, search space, physical/laboratory experimentation (if already done/available), and the physical environment. It also includes the information that can be generated through any means - deterministic/nondeterministic/graphics. Instead of attempting a solution of the problem straightway through a GA without having/using the information/knowledge of the character of the system, we would do consciously a much better job of producing a solution by using the information generated/created in the very first step of the preprocessor. We, therefore, unstintingly advocate the use of a preprocessor to solve a real-world optimization problem including NP-complete ones before using the statistically most appropriate GA. We also include such a GA for unconstrained function optimization problems.

  4. Nanoparticle-based cancer treatment: can delivered dose and biological dose be reliably modeled and quantified?

    NASA Astrophysics Data System (ADS)

    Hoopes, P. Jack; Petryk, Alicia A.; Giustini, Andrew J.; Stigliano, Robert V.; D'Angelo, Robert N.; Tate, Jennifer A.; Cassim, Shiraz M.; Foreman, Allan; Bischof, John C.; Pearce, John A.; Ryan, Thomas

    2011-03-01

    Essential developments in the reliable and effective use of heat in medicine include: 1) the ability to model energy deposition and the resulting thermal distribution and tissue damage (Arrhenius models) over time in 3D, 2) the development of non-invasive thermometry and imaging for tissue damage monitoring, and 3) the development of clinically relevant algorithms for accurate prediction of the biological effect resulting from a delivered thermal dose in mammalian cells, tissues, and organs. The accuracy and usefulness of this information varies with the type of thermal treatment, the sensitivity and accuracy of tissue assessment, and the volume, shape, and heterogeneity of the tumor target and normal tissue. That said, without the development of an algorithm that has allowed the comparison and prediction of the effects of hyperthermia in a wide variety of tumor and normal tissues and settings (cumulative equivalent minutes, CEM), hyperthermia would never have achieved clinical relevance. A new hyperthermia technology, magnetic nanoparticle-based hyperthermia (mNPH), has distinct advantages over previous techniques: the ability to target the heat to individual cancer cells (with a nontoxic nanoparticle), and to excite the nanoparticles noninvasively with a noninjurious magnetic field, thus sparing associated normal cells and greatly improving the therapeutic ratio. As such, this modality has great potential as a primary and adjuvant cancer therapy. Although the targeted and safe nature of the noninvasive external activation (hysteretic heating) is a tremendous asset, the large number of therapy-based variables and the lack of an accurate and useful method for predicting, assessing and quantifying mNP dose and treatment effect is a major obstacle to moving the technology into routine clinical practice. Among other parameters, mNPH will require the accurate determination of specific nanoparticle heating capability, the total nanoparticle content and biodistribution in
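
    The CEM algorithm credited above with making hyperthermia clinically comparable is usually written as CEM43 = Σ tᵢ · R^(43 − Tᵢ), with R ≈ 0.5 above 43 °C and ≈ 0.25 below (the Sapareto-Dewey convention). A minimal sketch of that standard formulation, not specific to the authors' mNPH work:

    ```python
    def cem43(temps_c, dt_min):
        """Cumulative equivalent minutes at 43 degC for a sampled temperature
        history; R values follow the widely used Sapareto-Dewey convention."""
        total = 0.0
        for t in temps_c:
            r = 0.5 if t >= 43.0 else 0.25
            total += dt_min * r ** (43.0 - t)
        return total

    # 30 one-minute samples at 44 degC -> 60 equivalent minutes at 43 degC.
    print(cem43([44.0] * 30, dt_min=1.0))
    ```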

  5. Large scale tracking algorithms.

    SciTech Connect

    Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  6. RADFLO physics and algorithms

    SciTech Connect

    Symbalisty, E.M.D.; Zinn, J.; Whitaker, R.W.

    1995-09-01

    This paper describes the history, physics, and algorithms of the computer code RADFLO and its extension HYCHEM. RADFLO is a one-dimensional, radiation-transport hydrodynamics code that is used to compute early-time fireball behavior for low-altitude nuclear bursts. The primary use of the code is the prediction of optical signals produced by nuclear explosions. It has also been used to predict thermal and hydrodynamic effects that are used for vulnerability and lethality applications. Another closely related code, HYCHEM, is an extension of RADFLO which includes the effects of nonequilibrium chemistry. Some examples of numerical results will be shown, along with scaling expressions derived from those results. We describe new computations of the structures and luminosities of steady-state shock waves and radiative thermal waves, which have been extended to cover a range of ambient air densities for high-altitude applications. We also describe recent modifications of the codes to use a one-dimensional analog of the CAVEAT fluid-dynamics algorithm in place of the former standard Richtmyer-von Neumann algorithm.

  7. Evaluating super resolution algorithms

    NASA Astrophysics Data System (ADS)

    Kim, Youn Jin; Park, Jong Hyun; Shin, Gun Shik; Lee, Hyun-Seung; Kim, Dong-Hyun; Park, Se Hyeok; Kim, Jaehyun

    2011-01-01

    This study intends to establish a sound testing and evaluation methodology, based upon human visual characteristics, for appreciating image restoration accuracy, in addition to comparing the subjective results with predictions by some objective evaluation methods. In total, six different super resolution (SR) algorithms - iterative back-projection (IBP), robust SR, maximum a posteriori (MAP), projections onto convex sets (POCS), non-uniform interpolation, and a frequency domain approach - were selected. The performance comparison between the SR algorithms in terms of their restoration accuracy was carried out both subjectively and objectively. The former methodology relies upon the paired comparison method, which involves the simultaneous scaling of two stimuli with respect to image restoration accuracy. For the latter, both conventional image quality metrics and color difference methods were implemented. Consequently, POCS and non-uniform interpolation outperformed the others in an ideal situation, while restoration-based methods appear more accurate to the HR image in a real-world case where prior information about the blur kernel remains unknown. However, the noise-added image could not be restored successfully by any of those methods. The latest International Commission on Illumination (CIE) standard color difference equation, CIEDE2000, was found to predict the subjective results accurately and outperformed conventional methods for evaluating the restoration accuracy of those SR algorithms.

  8. Dosimetric comparison of Acuros™ BV with AAPM TG43 dose calculation formalism in breast interstitial high-dose-rate brachytherapy with the use of metal catheters

    PubMed Central

    Nagarajan, Vivekanandan; Reddy K, Sathyanarayana; Karunanidhi, Gunaseelan; Singhavajala, Vivekanandam

    2015-01-01

    Purpose Radiotherapy for breast cancer includes different techniques and methods. The purpose of this study is to compare dosimetric calculations using the TG-43 dose formalism and the Varian Acuros™ BV (GBBS) dose calculation algorithm for interstitial breast implants using metal catheters in high-dose-rate (HDR) brachytherapy with 192Ir. Material and methods Twenty patients who were considered for breast conserving surgery (BCS) underwent lumpectomy and axillary dissection. These patients received perioperative interstitial HDR brachytherapy as an upfront boost using rigid metal implants; whole breast irradiation was delivered after a gap of two weeks. Standard brachytherapy dose calculation was done by TG-43 dosimetry, which does not take into account tissue heterogeneity, attenuation and scatter in the metal applicator, or effects of the patient boundary. Acuros™ BV, a Grid-Based Boltzmann Solver (GBBS) code that takes all of the above into consideration, was used to recompute the dosimetry, and the two systems were compared. Results Comparison of the GBBS and TG-43 formalisms on interstitial metal catheters shows differences in the dose delivered to the CTV and other OARs. While the estimated dose to the CTV was only marginally different between the two systems, significant differences ranging from 4% to 53% were found in the mean values of the other parameters analyzed. Conclusions The TG-43 algorithm appears to significantly overestimate the dose to various volumes of interest; the GBBS-based dose calculation algorithm has an impact on the CTV, heart, ipsilateral lung, contralateral breast, skin, and ribs of the ipsilateral side; the prescription changes occurred due to the effects of metal catheters, inhomogeneities, and scatter conditions. PMID:26622230
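
    In its point-source form, the TG-43 formalism being compared here reduces to a product of air-kerma strength, dose-rate constant, inverse-square falloff, a radial dose function and an anisotropy factor. A sketch of that equation follows; the numeric constants and the g(r) and anisotropy fits are illustrative placeholders, not consensus data for any specific 192Ir source model.

    ```python
    # Point-source TG-43: Ddot(r) = S_K * Lambda * (r0/r)^2 * g(r) * phi_an(r).
    S_K = 40800.0    # air-kerma strength in U (uGy m^2/h); ~10 Ci 192Ir, illustrative
    LAMBDA = 1.109   # dose-rate constant in cGy/(h*U); typical 192Ir magnitude
    R0 = 1.0         # reference distance, cm

    def g(r):        # placeholder radial dose function, ~1 near 1 cm
        return 1.0 - 0.005 * (r - R0)

    def phi_an(r):   # placeholder 1-D anisotropy factor
        return 0.97

    def dose_rate_cgy_per_h(r_cm):
        """Inverse-square law modulated by g(r) and the anisotropy factor."""
        return S_K * LAMBDA * (R0 / r_cm) ** 2 * g(r_cm) * phi_an(r_cm)

    for r in (0.5, 1.0, 2.0):
        print(f"{r} cm: {dose_rate_cgy_per_h(r):,.0f} cGy/h")
    ```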

  9. Optimal radiotherapy dose schedules under parametric uncertainty

    NASA Astrophysics Data System (ADS)

    Badri, Hamidreza; Watanabe, Yoichi; Leder, Kevin

    2016-01-01

    We consider the effects of parameter uncertainty on the optimal radiation schedule in the context of the linear-quadratic model. Our interest arises from the observation that if inter-patient variability in normal and tumor tissue radiosensitivity or sparing factor of the organs-at-risk (OAR) are not accounted for during radiation scheduling, the performance of the therapy may be strongly degraded or the OAR may receive a substantially larger dose than the allowable threshold. This paper proposes a stochastic radiation scheduling concept to incorporate inter-patient variability into the scheduling optimization problem. Our method is based on a probabilistic approach, where the model parameters are given by a set of random variables. Our probabilistic formulation ensures that our constraints are satisfied with a given probability, and that our objective function achieves a desired level with a stated probability. We used a variable transformation to reduce the resulting optimization problem to two dimensions. We showed that the optimal solution lies on the boundary of the feasible region and we implemented a branch and bound algorithm to find the global optimal solution. We demonstrated how the configuration of optimal schedules in the presence of uncertainty compares to optimal schedules in the absence of uncertainty (conventional schedule). We observed that in order to protect against the possibility of the model parameters falling into a region where the conventional schedule is no longer feasible, it is required to avoid extremal solutions, i.e. a single large dose or very large total dose delivered over a long period. Finally, we performed numerical experiments in the setting of head and neck tumors including several normal tissues to reveal the effect of parameter uncertainty on optimal schedules and to evaluate the sensitivity of the solutions to the choice of key model parameters.
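
    As background to the linear-quadratic model used above, a schedule of n fractions of dose d is commonly summarized by its biologically effective dose, BED = n·d·(1 + d/(α/β)). The sketch below compares a conventional and a hypofractionated schedule using generic textbook α/β values, not the paper's fitted parameters.

    ```python
    def bed(n_fractions, dose_per_fraction, alpha_beta):
        """Biologically effective dose under the linear-quadratic model."""
        return n_fractions * dose_per_fraction * (1 + dose_per_fraction / alpha_beta)

    # Conventional 30 x 2 Gy vs. a hypofractionated 5 x 10 Gy schedule, for a
    # tumor (alpha/beta ~ 10 Gy) and a late-responding OAR (alpha/beta ~ 3 Gy).
    for n, d in [(30, 2.0), (5, 10.0)]:
        print(f"{n} x {d} Gy: tumor BED = {bed(n, d, 10.0):.1f} Gy, "
              f"OAR BED = {bed(n, d, 3.0):.1f} Gy")
    ```

    The single-large-dose schedule inflates the OAR BED far faster than the tumor BED, which is why the optimization must avoid the extremal solutions mentioned above.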

  10. Bayesian estimation of dose thresholds

    NASA Technical Reports Server (NTRS)

    Groer, P. G.; Carnes, B. A.

    2003-01-01

    An example is described of Bayesian estimation of radiation absorbed dose thresholds (subsequently simply referred to as dose thresholds) using a specific parametric model applied to a data set on mice exposed to 60Co gamma rays and fission neutrons. A Weibull based relative risk model with a dose threshold parameter was used to analyse, as an example, lung cancer mortality and determine the posterior density for the threshold dose after single exposures to 60Co gamma rays or fission neutrons from the JANUS reactor at Argonne National Laboratory. The data consisted of survival, censoring times and cause of death information for male B6CF1 unexposed and exposed mice. The 60Co gamma whole-body doses for the two exposed groups were 0.86 and 1.37 Gy. The neutron whole-body doses were 0.19 and 0.38 Gy. Marginal posterior densities for the dose thresholds for neutron and gamma radiation were calculated with numerical integration and found to have quite different shapes. The density of the threshold for 60Co is unimodal with a mode at about 0.50 Gy. The threshold density for fission neutrons declines monotonically from a maximum value at zero with increasing doses. The posterior densities for all other parameters were similar for the two radiation types.
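
    The grid-based marginal-posterior computation described here can be illustrated compactly. The sketch below substitutes a toy binomial dose-response likelihood for the paper's Weibull relative-risk survival model; the group sizes, event counts, slope and baseline are invented, and only the dose points come from the abstract.

    ```python
    import numpy as np

    doses     = np.array([0.0, 0.19, 0.38, 0.86, 1.37])   # Gy (from the abstract)
    n_animals = np.array([200, 150, 150, 120, 120])       # invented
    n_events  = np.array([10, 9, 14, 30, 55])             # invented

    def log_lik(theta, slope=0.4, baseline=0.05):
        """Toy threshold model: risk rises linearly above threshold theta."""
        p = np.clip(baseline + slope * np.maximum(doses - theta, 0.0),
                    1e-9, 1 - 1e-9)
        return np.sum(n_events * np.log(p) + (n_animals - n_events) * np.log(1 - p))

    grid = np.linspace(0.0, 1.5, 301)          # flat prior on [0, 1.5] Gy
    ll = np.array([log_lik(t) for t in grid])
    post = np.exp(ll - ll.max())               # subtract max to avoid underflow
    post /= post.sum() * (grid[1] - grid[0])   # normalize to a density
    print("posterior mode:", grid[np.argmax(post)], "Gy")
    ```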

  11. Low-dose CT reconstruction via edge-preserving total variation regularization

    NASA Astrophysics Data System (ADS)

    Tian, Zhen; Jia, Xun; Yuan, Kehong; Pan, Tinsu; Jiang, Steve B.

    2011-09-01

    High radiation dose in computed tomography (CT) scans increases the lifetime risk of cancer and has become a major clinical concern. Recently, iterative reconstruction algorithms with total variation (TV) regularization have been developed to reconstruct CT images from highly undersampled data acquired at low mAs levels in order to reduce the imaging dose. Nonetheless, low-contrast structures tend to be smoothed out by the TV regularization, posing a great challenge for the TV method. To solve this problem, in this work we develop an iterative CT reconstruction algorithm with edge-preserving TV (EPTV) regularization to reconstruct CT images from highly undersampled data obtained at low mAs levels. The CT image is reconstructed by minimizing an energy consisting of an EPTV norm and a data fidelity term posed by the x-ray projections. The EPTV term is proposed to preferentially perform smoothing only on the non-edge part of the image in order to better preserve the edges, which is realized by introducing a penalty weight to the original TV norm. During the reconstruction process, the pixels at the edges are gradually identified and given a low penalty weight. Our iterative algorithm is implemented on a graphics processing unit to improve its speed. We test our reconstruction algorithm on a digital NURBS-based cardiac-torso phantom, a physical chest phantom and a Catphan phantom. Reconstruction results from a conventional filtered backprojection (FBP) algorithm and a TV regularization method without the edge-preserving penalty are also presented for comparison purposes. The experimental results illustrate that both the TV-based algorithm and our EPTV algorithm outperform the conventional FBP algorithm in suppressing streaking artifacts and image noise in a low-dose context. Our edge-preserving algorithm is superior to the TV-based algorithm in that it can preserve more information in low-contrast structures and therefore maintain acceptable spatial resolution.
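
    The core idea, a TV penalty whose per-pixel weight decays at strong edges, can be sketched in a few lines. The code below is a CPU denoising toy using gradient descent on a smoothed TV energy, not the authors' GPU projection-domain reconstruction; the exponential weight function, step size and parameters are assumptions.

    ```python
    import numpy as np

    def eptv_denoise(y, lam=0.15, tau=0.2, eps=1e-3, iters=200):
        """Gradient descent on 0.5*||x - y||^2 + lam * sum w * |grad x|,
        where w = exp(-|grad x| / tau) is recomputed periodically so that
        smoothing relaxes at strong edges (assumed weight form)."""
        x = y.copy()
        for k in range(iters):
            dx = np.diff(x, axis=0, append=x[-1:, :])
            dy = np.diff(x, axis=1, append=x[:, -1:])
            mag = np.sqrt(dx**2 + dy**2 + eps**2)
            if k % 20 == 0:
                w = np.exp(-mag / tau)   # low weight (little smoothing) at edges
            # Discrete divergence of (w * grad x / |grad x|) gives the TV gradient.
            px, py = w * dx / mag, w * dy / mag
            div = (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))
            x -= 0.2 * ((x - y) - lam * div)
        return x

    # Noisy piecewise-constant test image.
    img = np.zeros((64, 64)); img[16:48, 16:48] = 1.0
    noisy = img + 0.1 * np.random.default_rng(1).normal(size=img.shape)
    print(np.abs(eptv_denoise(noisy) - img).mean())
    ```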

  12. Design of robust systolic algorithms

    SciTech Connect

    Varman, P.J.; Fussell, D.S.

    1983-01-01

    A primary reason for the susceptibility of systolic algorithms to faults is their strong dependence on the interconnection between the processors in a systolic array. A technique to transform any linear systolic algorithm into an equivalent pipelined algorithm that executes on arbitrary trees is presented. 5 references.

  13. Multipartite entanglement in quantum algorithms

    SciTech Connect

    Bruss, D.; Macchiavello, C.

    2011-05-15

    We investigate the entanglement features of the quantum states employed in quantum algorithms. In particular, we analyze the multipartite entanglement properties in the Deutsch-Jozsa, Grover, and Simon algorithms. Our results show that for these algorithms most instances involve multipartite entanglement.

  14. Two Meanings of Algorithmic Mathematics.

    ERIC Educational Resources Information Center

    Maurer, Stephen B.

    1984-01-01

    Two mathematical topics are interpreted from the viewpoints of traditional (performing algorithms) and contemporary (creating algorithms and thinking in terms of them for solving problems and developing theory) algorithmic mathematics. The two topics are Horner's method for evaluating polynomials and Gauss's method for solving systems of linear…
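
    Horner's method, the first topic mentioned, rewrites a_n x^n + ... + a_1 x + a_0 as (...((a_n x + a_(n-1))x + a_(n-2))x + ...) + a_0, so a degree-n polynomial is evaluated with only n multiplications and n additions. A minimal sketch:

    ```python
    def horner(coeffs, x):
        """Evaluate a polynomial; coefficients given from highest to lowest degree."""
        result = 0.0
        for c in coeffs:
            result = result * x + c
        return result

    # 2x^3 - 6x^2 + 2x - 1 at x = 3: 2*27 - 6*9 + 2*3 - 1 = 5
    print(horner([2, -6, 2, -1], 3.0))  # 5.0
    ```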

  15. Algorithm for Constructing Contour Plots

    NASA Technical Reports Server (NTRS)

    Johnson, W.; Silva, F.

    1984-01-01

    General computer algorithm developed for construction of contour plots. Algorithm accepts as input data values at set of points irregularly distributed over plane. Algorithm based on interpolation scheme: points in plane connected by straight-line segments to form set of triangles. Program written in FORTRAN IV.
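
    The original program is FORTRAN IV, but the same triangulate-then-interpolate idea is available off the shelf today. A sketch using matplotlib's triangulation utilities; the scattered sample points and test function are invented.

    ```python
    import numpy as np
    import matplotlib.pyplot as plt
    import matplotlib.tri as mtri

    # Irregularly scattered sample points and values, as in the abstract.
    rng = np.random.default_rng(0)
    x, y = rng.uniform(0, 1, 200), rng.uniform(0, 1, 200)
    z = np.sin(4 * x) * np.cos(4 * y)

    # Delaunay triangulation connects the points into triangles; contour
    # lines are then interpolated linearly inside each triangle.
    tri = mtri.Triangulation(x, y)
    plt.tricontour(tri, z, levels=10)
    plt.savefig("contours.png")
    ```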

  16. Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    Finch, S.M.

    1990-09-01

    This monthly report summarizes the technical progress and project status for the Hanford Environmental Dose Reconstruction (HEDR) Project being conducted at the Pacific Northwest Laboratory (PNL) under the direction of a Technical Steering Panel (TSP). The TSP is composed of experts in numerous technical fields related to this project and represents the interests of the public. The objective of the Hanford Environmental Dose Reconstruction Project is to estimate the radiation doses that populations could have received from nuclear operations at Hanford since 1944. The project is divided into the following technical tasks. These tasks correspond to the path radionuclides followed, from release to impact on humans (dose estimates): source terms, environmental transport, environmental monitoring data, demographics, agriculture, food habits, environmental pathways and dose estimates. 3 figs.

  17. The clinical algorithm nosology: a method for comparing algorithmic guidelines.

    PubMed

    Pearson, S D; Margolis, C Z; Davis, S; Schreier, L K; Gottlieb, L K

    1992-01-01

    Concern regarding the cost and quality of medical care has led to a proliferation of competing clinical practice guidelines. No technique has been described for determining objectively the degree of similarity between alternative guidelines for the same clinical problem. The authors describe the development of the Clinical Algorithm Nosology (CAN), a new method to compare one form of guideline: the clinical algorithm. The CAN measures overall design complexity independent of algorithm content, qualitatively describes the clinical differences between two alternative algorithms, and then scores the degree of similarity between them. CAN algorithm design-complexity scores correlated highly with clinicians' estimates of complexity on an ordinal scale (r = 0.86). Five pairs of clinical algorithms addressing three topics (gallstone lithotripsy, thyroid nodule, and sinusitis) were selected for interrater reliability testing of the CAN clinical-similarity scoring system. Raters categorized the similarity of algorithm pathways in alternative algorithms as "identical," "similar," or "different." Interrater agreement was achieved on 85/109 scores (80%), weighted kappa statistic, k = 0.73. It is concluded that the CAN is a valid method for determining the structural complexity of clinical algorithms, and a reliable method for describing differences and scoring the similarity between algorithms for the same clinical problem. In the future, the CAN may serve to evaluate the reliability of algorithm development programs, and to support providers and purchasers in choosing among alternative clinical guidelines.
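
    The CAN's interrater reliability step, categorizing pathway pairs as "identical," "similar," or "different" and scoring agreement with a weighted kappa, can be reproduced with standard tools. The ratings below are invented for illustration; only the three-level ordinal scale comes from the abstract.

    ```python
    from sklearn.metrics import cohen_kappa_score

    # Hypothetical ratings of 12 algorithm pathways by two raters, on the
    # CAN's ordinal scale: 0 = identical, 1 = similar, 2 = different.
    rater_a = [0, 0, 1, 2, 1, 0, 2, 2, 1, 0, 1, 2]
    rater_b = [0, 1, 1, 2, 1, 0, 2, 1, 1, 0, 1, 2]

    # Linear weights penalize "identical" vs. "different" disagreements more
    # heavily than adjacent-category ones, as befits an ordinal scale.
    print(cohen_kappa_score(rater_a, rater_b, weights="linear"))
    ```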

  18. MO-PIS-Exhibit Hall-01: Imaging: CT Dose Optimization Technologies I

    SciTech Connect

    Denison, K; Smith, S

    2014-06-15

    Partners in Solutions is an exciting new program in which AAPM partners with our vendors to present practical “hands-on” information about the equipment and software systems that we use in our clinics. The imaging topic this year is CT scanner dose optimization capabilities. Note that the sessions are being held in a special purpose room built on the Exhibit Hall Floor, to encourage further interaction with the vendors. Dose Optimization Capabilities of GE Computed Tomography Scanners Presentation Time: 11:15 – 11:45 AM GE Healthcare is dedicated to the delivery of high quality clinical images through the development of technologies which optimize the application of ionizing radiation. In computed tomography, dose management solutions fall into four categories. One employs projection data and statistical modeling to decrease noise in the reconstructed image, creating an opportunity for mA reduction in the acquisition of diagnostic images. Veo represents true Model-Based Iterative Reconstruction (MBIR). Using high-level algorithms in tandem with advanced computing power, Veo enables lower pixel noise standard deviation and improved spatial resolution within a single image. Advanced Adaptive Image Filters allow for maintenance of spatial resolution while reducing image noise. Examples of adaptive image space filters include Neuro 3-D filters and Cardiac Noise Reduction Filters. AutomA adjusts mA along the z-axis and is the CT equivalent of auto exposure control in conventional x-ray systems. Dynamic Z-axis Tracking offers an additional opportunity for dose reduction in helical acquisitions, while SmartTrack Z-axis Tracking serves to ensure beam, collimator and detector alignment during tube rotation. SmartmA provides angular mA modulation. ECG Helical Modulation reduces mA during the systolic phase of the heart cycle. SmartBeam optimization uses bowtie beam-shaping hardware and software to filter off-axis x-rays, minimizing dose and reducing x-ray scatter. The

  19. Radioactive Dose Assessment and NRC Verification of Licensee Dose Calculation.

    1994-09-16

    Version 00 PCDOSE was developed for the NRC to perform calculations to determine radioactive dose due to the annual averaged offsite release of liquid and gaseous effluent by U.S. commercial nuclear power facilities. Using NRC-approved dose assessment methodologies, it acts as an inspector's tool for verifying the compliance of the facility's dose assessment software. PCDOSE duplicates the calculations of the GASPAR II mainframe code, as well as calculations using the methodologies of Reg. Guide 1.109 Rev. 1 and NUREG-0133, by optional choice.

  20. Comparison of pencil-beam, collapsed-cone and Monte-Carlo algorithms in radiotherapy treatment planning for 6-MV photons

    NASA Astrophysics Data System (ADS)

    Kim, Sung Jin; Kim, Sung Kyu; Kim, Dong Ho

    2015-07-01

    Treatment planning system calculations in inhomogeneous regions may present significant inaccuracies due to loss of electronic equilibrium. In this study, three different dose calculation algorithms, pencil beam (PB), collapsed cone (CC), and Monte-Carlo (MC), provided by our planning system were compared to assess their impact on the three-dimensional planning of lung and breast cases. A total of five breast and five lung cases were calculated by using the PB, CC, and MC algorithms. Planning treatment volume (PTV) and organs at risk (OARs) delineations were performed according to our institution's protocols on the Oncentra MasterPlan image registration module, on 0.3-0.5 cm computed tomography (CT) slices taken under normal respiration conditions. Intensity-modulated radiation therapy (IMRT) plans were calculated with the three algorithms for each patient. The plans were produced on the Oncentra MasterPlan (PB and CC) and CMS Monaco (MC) treatment planning systems for 6 MV. The plans were compared in terms of the dose distribution in the target, the OAR volumes, and the monitor units (MUs). Furthermore, absolute dosimetry was measured using a three-dimensional diode array detector (ArcCHECK) to evaluate the dose differences in a homogeneous phantom. Comparing the dose distributions planned by using the PB, CC, and MC algorithms, the PB algorithm provided adequate coverage of the PTV. The MUs calculated using the PB algorithm were less than those calculated by using the CC and MC algorithms. The MC algorithm showed the highest accuracy in terms of the absolute dosimetry. Differences were found when comparing the calculation algorithms. The PB algorithm estimated higher doses for the target than the CC and the MC algorithms; that is, the PB algorithm overestimated the dose compared with those calculated by using the CC and the MC algorithms. The MC algorithm showed better accuracy than the other algorithms.

  1. SU-E-J-89: Motion Effects On Organ Dose in Respiratory Gated Stereotactic Body Radiation Therapy

    SciTech Connect

    Wang, T; Zhu, L; Khan, M; Landry, J; Rajpara, R; Hawk, N

    2014-06-01

    Purpose: Existing reports on gated radiation therapy focus mainly on optimizing dose delivery to the target structure. This work investigates the motion effects on radiation dose delivered to organs at risk (OAR) in respiratory gated stereotactic body radiation therapy (SBRT). A new algorithmic tool of dose analysis is developed to evaluate the optimality of the gating phase for dose sparing on OARs while ensuring adequate target coverage. Methods: Eight patients with pancreatic cancer were treated on a phase I prospective study employing 4DCT-based SBRT. For each patient, 4DCT scans are acquired and sorted into 10 respiratory phases (inhale-exhale-inhale). Treatment planning is performed on the average CT image. The average CT is spatially registered to the other phases. The resultant displacement field is then applied to the plan dose map to estimate the actual dose map for each phase. Dose values of each voxel are fitted to a sinusoidal function. Fitting parameters of dose variation, mean delivered dose and optimal gating phase for each voxel over the respiration cycle are mapped on the dose volume. Results: The sinusoidal function accurately models the dose change during respiratory motion (mean fitting error 4.6%). In the eight patients, the mean dose variation is 3.3 Gy on OARs with a maximum of 13.7 Gy. Two patients have about 100 cm³ volumes covered by more than 5 Gy deviation. The mean delivered dose maps are similar to the plan dose with slight deformation. The optimal gating phase varies highly across the patient, with phase 5 or 6 on about 60% of the volume, and phase 0 on most of the rest. Conclusion: A new algorithmic tool is developed to conveniently quantify dose deviation on OARs from the plan dose during the respiratory cycle. The proposed software facilitates the treatment planning process by providing the optimal respiratory gating phase for dose sparing on each OAR.
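
    The per-voxel fitting step maps naturally onto a standard least-squares routine. The sketch below fits one voxel's dose trace across the 10 phases with scipy; the dose values are invented, and only the 10-phase period is taken from the abstract.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Dose at one voxel across the 10 respiratory phases (invented values).
    phases = np.arange(10)
    dose = np.array([20.1, 21.0, 22.4, 23.1, 22.8, 21.9, 20.6, 19.5, 19.2, 19.6])

    def model(phase, mean_dose, amplitude, phase_shift):
        """Sinusoid over one respiratory cycle (10 phases per period)."""
        return mean_dose + amplitude * np.sin(2 * np.pi * phase / 10.0 + phase_shift)

    (mean_dose, amp, shift), _ = curve_fit(model, phases, dose, p0=[20.0, 2.0, 0.0])
    optimal_phase = phases[np.argmin(model(phases, mean_dose, amp, shift))]
    print(f"mean {mean_dose:.1f} Gy, variation {2 * abs(amp):.1f} Gy, "
          f"lowest-dose gating phase {optimal_phase}")
    ```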

  2. Monte Carlo- versus pencil-beam-/collapsed-cone-dose calculation in a heterogeneous multi-layer phantom

    NASA Astrophysics Data System (ADS)

    Krieger, Thomas; Sauer, Otto A.

    2005-03-01

    The aim of this work was to evaluate the accuracy of dose predicted in heterogeneous media by a pencil beam (PB), a collapsed cone (CC) and a Monte Carlo (MC) algorithm. For this purpose, a simple multi-layer phantom composed of Styrofoam and white polystyrene was irradiated with 10 × 10 cm2 as well as 20 × 20 cm2 open 6 MV photon fields. The beam axis was aligned parallel to the layers and various field offsets were applied. Thereby, the amount of lateral scatter was controlled. Dose measurements were performed with an ionization chamber positioned both in the central layer of white polystyrene and the adjacent layers of Styrofoam. It was found that, in white polystyrene, both MC and CC calculations agreed satisfactorily with the measurements, whereas the PB algorithm calculated 12% higher doses on average. When off-axis dose profiles were studied, the differences between the three algorithms' results increased dramatically. In the regions of low density, CC calculated 10% (8%) lower doses for the 10 × 10 cm2 (20 × 20 cm2) fields than MC. The MC data, on the other hand, agreed well with the measurements, presuming that proper replacement correction for the ionization chamber embedded in Styrofoam was performed. PB results evidently did not account for the scattering geometry and were therefore not really comparable. Our investigations showed that the PB algorithm generates very large errors for the dose in the vicinity of interfaces and within low-density regions. We also found that for the CC algorithm used, large deviations in absolute dose (dose/monitor unit) occur in regions of electronic disequilibrium. The performance might be improved by better-adapted parameters. We therefore recommend a careful investigation of the accuracy of dose calculations in heterogeneous media for each beam data set and algorithm.

  3. The Assessment of Effective Dose Equivalent Using Personnel Dosimeters

    NASA Astrophysics Data System (ADS)

    Xu, Xie

    From January 1994, U.S. nuclear plants must develop a technically rigorous approach for determining the effective dose equivalent for their work forces. This dissertation explains concepts associated with effective dose equivalent and describes how to assess effective dose equivalent by using conventional personnel dosimetry measurements. A Monte Carlo computer code, MCNP, was used to calculate photon transport through a model of the human body. Published mathematical phantoms of the human adult male and female were used to simulate irradiation from a variety of external radiation sources in order to calculate organ and tissue doses, as well as effective dose equivalent using weighting factors from ICRP Publication 26. The radiation sources considered were broad parallel photon beams incident on the body from 91 different angles and isotropic point sources located at 234 different locations in contact with or near the body. Monoenergetic photons of 0.08, 0.3, and 1.0 MeV were considered for both source types. Personnel dosimeters were simulated on the surface of the body and exposed to the same sources. From these data, the influence of dosimeter position on dosimeter response was investigated. Different algorithms for assessing effective dose equivalent from personnel dosimeter responses were proposed and evaluated. The results indicate that the current single-badge approach is satisfactory for most common exposure situations encountered in nuclear plants, but additional conversion factors may be used when more accurate results become desirable. For uncommon exposures involving a source situated at the back of the body or a source located overhead, the current approach of using multiple badges and assigning the highest dose is overly conservative and unnecessarily expensive. For these uncommon exposures, a new algorithm, based on two dosimeters, one on the front of the body and another on the back of the body, has been shown to yield conservative assessment of
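
    A two-dosimeter algorithm of this kind amounts to a weighted combination of the front and back badge readings. A minimal sketch follows; the weights below are placeholders chosen for illustration, not the dissertation's Monte-Carlo-derived coefficients.

    ```python
    # Hedged sketch of a two-badge effective dose equivalent (EDE) estimate.
    # The weights are hypothetical; a real algorithm would fit them to
    # Monte Carlo organ-dose calculations like those described above.
    def ede_two_badge(h_front_mSv, h_back_mSv, w_front=0.6, w_back=0.4):
        """Weighted sum of front- and back-of-body dosimeter readings."""
        return w_front * h_front_mSv + w_back * h_back_mSv

    # Posterior/overhead source: the back badge reads higher than the front.
    print(ede_two_badge(h_front_mSv=1.0, h_back_mSv=4.0))  # 2.2 mSv
    ```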

  4. Objective assessment of image quality and dose reduction in CT iterative reconstruction

    SciTech Connect

    Vaishnav, J. Y. Jung, W. C.; Popescu, L. M.; Zeng, R.; Myers, K. J.

    2014-07-15

    Purpose: Iterative reconstruction (IR) algorithms have the potential to reduce radiation dose in CT diagnostic imaging. As these algorithms become available on the market, a standardizable method of quantifying the dose reduction that a particular IR method can achieve would be valuable. Such a method would assist manufacturers in making promotional claims about dose reduction, buyers in comparing different devices, physicists in independently validating the claims, and the United States Food and Drug Administration in regulating the labeling of CT devices. However, the nonlinear nature of commercially available IR algorithms poses challenges to objectively assessing image quality, a necessary step in establishing the amount of dose reduction that a given IR algorithm can achieve without compromising that image quality. This review paper seeks to consolidate information relevant to objectively assessing the quality of CT IR images, and thereby measuring the level of dose reduction that a given IR algorithm can achieve. Methods: The authors discuss task-based methods for assessing the quality of CT IR images and evaluating dose reduction. Results: The authors explain and review recent literature on signal detection and localization tasks in CT IR image quality assessment, the design of an appropriate phantom for these tasks, possible choices of observers (including human and model observers), and methods of evaluating observer performance. Conclusions: Standardizing the measurement of dose reduction is a problem of broad interest to the CT community and to public health. A necessary step in the process is the objective assessment of CT image quality, for which various task-based methods may be suitable. This paper attempts to consolidate recent literature that is relevant to the development and implementation of task-based methods for the assessment of CT IR image quality.

  5. Improved multiprocessor garbage collection algorithms

    SciTech Connect

    Newman, I.A.; Stallard, R.P.; Woodward, M.C.

    1983-01-01

    Outlines the results of an investigation of existing multiprocessor garbage collection algorithms and introduces two new algorithms which significantly improve some aspects of the performance of their predecessors. The two algorithms arise from different starting assumptions. One considers the case where the algorithm will terminate successfully whatever list structure is being processed and assumes that the extra data space should be minimised. The other seeks a very fast garbage collection time for list structures that do not contain loops. Results of both theoretical and experimental investigations are given to demonstrate the efficacy of the algorithms. 7 references.

  6. Effect of tube current modulation for dose estimation using a simulation tool on body CT examination.

    PubMed

    Kawaguchi, Ai; Matsunaga, Yuta; Kobayashi, Masanao; Suzuki, Shoichi; Matsubara, Kosuke; Chida, Koichi

    2015-12-01

    The purpose of this study was to evaluate the effect of tube current modulation for dose estimation of a body computed tomography (CT) examination using a simulation tool. The authors also compared longitudinal variations in tube current values between iterative reconstruction (IR) and filtered back-projection (FBP) reconstruction algorithms. One hundred patients underwent body CT examinations. The tube current values around 10 organ regions were recorded longitudinally from tube current information. The organ and effective doses were simulated by average tube current values and longitudinal modulated tube current values. The organ doses for the bladder and breast estimated by longitudinal modulated tube current values were 20 % higher and 25 % lower than those estimated using the average tube current values, respectively. The differences in effective doses were small (mean, 0.7 mSv). The longitudinal variations in tube current values were almost the same for the IR and FBP algorithms.

  7. Multi dose computed tomography image fusion based on hybrid sparse methodology.

    PubMed

    Venkataraman, Anuyogam; Alirezaie, Javad; Babyn, Paul; Ahmadian, Alireza

    2014-01-01

    With the increasing utilization of X-ray Computed Tomography (CT) in medical diagnosis, obtaining higher quality images with lower exposure to radiation has become a highly challenging task in image processing. In this paper, a novel sparse fusion algorithm is proposed to address the problem of low Signal to Noise Ratio (SNR) in low dose CT images. An initial fused image is obtained by combining low dose and medium dose images in the sparse domain, utilizing a Dual Tree Complex Wavelet Transform (DTCWT) dictionary trained on a high dose image. Then, a strongly focused image is obtained by determining the pixels of the source images that have high similarity with the pixels of the initial fused image. The final denoised image is obtained by fusing the strongly focused image and the decomposed sparse vectors of the source images, thereby preserving the edges and other critical information needed for diagnosis. This paper demonstrates the effectiveness of the proposed algorithm both quantitatively and qualitatively. PMID:25570844

  8. Inverse modeling of FIB milling by dose profile optimization

    NASA Astrophysics Data System (ADS)

    Lindsey, S.; Waid, S.; Hobler, G.; Wanzenböck, H. D.; Bertagnolli, E.

    2014-12-01

    FIB technologies possess a unique ability to form topographies that are difficult or impossible to generate with binary etching through typical photo-lithography. The ability to arbitrarily vary the spatial dose distribution, and therefore the amount of milling, opens possibilities for the production of a wide range of functional structures with applications in biology, chemistry, and optics. In practice, however, the realization of these goals is made difficult by the angular dependence of the sputtering yield and redeposition effects that vary as the topography evolves. An inverse modeling algorithm that optimizes dose profiles, defined as the superposition of time-invariant pixel dose profiles (determined from the beam parameters and pixel dwell times), is presented. The response of the target to a set of pixel dwell times is modeled by numerical continuum simulations utilizing 1st and 2nd order sputtering and redeposition; the resulting surfaces are evaluated with respect to a target topography in an error minimization routine. Two algorithms for the parameterization of pixel dwell times are presented: a direct pixel dwell time method, and an abstracted method that uses a refineable piecewise linear cage function to generate pixel dwell times from a minimal number of parameters. The cage function method demonstrates great flexibility and efficiency as compared to the direct fitting method, with performance enhancements exceeding ∼10× for medium to large simulation sets. Furthermore, the refineable nature of the cage function enables solutions to adapt to the desired target function. The optimization algorithm, although working with stationary dose profiles, is demonstrated to be applicable also outside the quasi-static approximation. Experimental data confirm the viability of the solutions for 5 × 7 μm deep lens-like structures defined by 90 pixel dwell times.
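
    The cage-function parameterization has a natural one-line core: expand a handful of optimizable control points into per-pixel dwell times by piecewise linear interpolation. A sketch under that reading; the control-point positions and values are invented, and only the 90-pixel count comes from the abstract.

    ```python
    import numpy as np

    def dwell_times_from_cage(control_x, control_t, n_pixels):
        """Expand a coarse piecewise-linear 'cage' of control points into
        per-pixel dwell times; only the few control values are optimized,
        and refinement simply inserts additional control points."""
        pixels = np.linspace(0.0, 1.0, n_pixels)
        return np.interp(pixels, control_x, control_t)

    # 5 control points instead of 90 free dwell times (illustrative values, in us).
    cage_x = [0.0, 0.25, 0.5, 0.75, 1.0]
    cage_t = [5.0, 12.0, 30.0, 12.0, 5.0]
    times = dwell_times_from_cage(cage_x, cage_t, n_pixels=90)
    print(times.round(1))
    ```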

  9. Radiation dose estimates for radiopharmaceuticals

    SciTech Connect

    Stabin, M.G.; Stubbs, J.B.; Toohey, R.E.

    1996-04-01

    Tables of radiation dose estimates based on the Cristy-Eckerman adult male phantom are provided for a number of radiopharmaceuticals commonly used in nuclear medicine. Radiation dose estimates are listed for all major source organs, and several other organs of interest. The dose estimates were calculated using the MIRD technique as implemented in the MIRDOSE3 computer code, developed by the Oak Ridge Institute for Science and Education, Radiation Internal Dose Information Center. In this code, residence times for source organs are used with decay data from the MIRD Radionuclide Data and Decay Schemes to produce estimates of radiation dose to organs of standardized phantoms representing individuals of different ages. The adult male phantom of the Cristy-Eckerman phantom series differs from the MIRD 5, or Reference Man, phantom in several aspects, the most important of which is the difference in the masses and absorbed fractions for the active (red) marrow. The absorbed fractions for low-energy photons striking the marrow are also different. Other minor differences exist, but are not likely to significantly affect dose estimates calculated with the two phantoms. The assumptions which support each of the dose estimates appear at the bottom of the table of estimates for a given radiopharmaceutical. In most cases, the model kinetics or organ residence times are explicitly given. The results presented here can easily be extended to include other radiopharmaceuticals or phantoms.
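
    The MIRD schema referenced here computes the dose to a target organ as a sum over source organs of cumulated activity (residence time) multiplied by an S value: D(target) = Σ Ã(source) · S(target ← source). The sketch below illustrates the bookkeeping only; every number in it is invented.

    ```python
    # MIRD schema bookkeeping; all values below are invented placeholders,
    # not MIRDOSE3 or Cristy-Eckerman data.
    residence_times_h = {"liver": 2.5, "kidneys": 0.8, "remainder": 10.0}
    s_factors_mGy_per_MBq_h = {     # S(target=liver <- source)
        "liver": 3.2e-3, "kidneys": 2.1e-4, "remainder": 4.0e-5,
    }

    administered_MBq = 100.0
    dose_liver = administered_MBq * sum(
        residence_times_h[src] * s_factors_mGy_per_MBq_h[src]
        for src in residence_times_h
    )
    print(f"liver dose ~ {dose_liver:.2f} mGy")
    ```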

  10. A new minimax algorithm

    NASA Technical Reports Server (NTRS)

    Vardi, A.

    1984-01-01

    The representation min t s.t. f_i(x) − t ≤ 0 for all i is examined. An active set strategy is designed with three categories of functions: active, semi-active, and non-active. This technique helps prevent the zigzagging which often occurs when an active set strategy is used. Some of the inequality constraints are handled with slack variables. A trust region strategy is also used, in which at each iteration there is a sphere around the current point within which the local approximation of the function is trusted. The algorithm is implemented in a successful computer program. Numerical results are provided.
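
    This epigraph representation of the minimax problem min over x of max_i f_i(x) can be handed directly to a general constrained solver. The sketch below is not the paper's active-set/trust-region algorithm; it only demonstrates the representation, using SLSQP on three invented smooth functions.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Epigraph form of min_x max_i f_i(x): minimize t subject to
    # f_i(x) - t <= 0. Variables z = [x1, x2, t]; toy smooth f_i.
    fs = [lambda x: (x[0] - 1)**2 + x[1]**2,
          lambda x: (x[0] + 1)**2 + x[1]**2,
          lambda x: x[0]**2 + (x[1] - 1)**2]

    # scipy's "ineq" convention is fun(z) >= 0, i.e. t - f_i(x) >= 0.
    cons = [{"type": "ineq", "fun": (lambda z, f=f: z[2] - f(z[:2]))} for f in fs]
    res = minimize(lambda z: z[2], x0=[0.0, 0.0, 5.0],
                   constraints=cons, method="SLSQP")
    print("x =", res.x[:2], " max_i f_i =", res.fun)
    ```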

  11. Parallel algorithm development

    SciTech Connect

    Adams, T.F.

    1996-06-01

    Rapid changes in parallel computing technology are causing significant changes in the strategies being used for parallel algorithm development. One approach is simply to write computer code in a standard language like FORTRAN 77 with the expectation that the compiler will produce executable code that will run in parallel. The alternatives are: (1) to build explicit message passing directly into the source code; or (2) to write source code without explicit reference to message passing or parallelism, but use a general communications library to provide efficient parallel execution. Application of these strategies is illustrated with examples of codes currently under development.

  12. MLP iterative construction algorithm

    NASA Astrophysics Data System (ADS)

    Rathbun, Thomas F.; Rogers, Steven K.; DeSimio, Martin P.; Oxley, Mark E.

    1997-04-01

    The MLP Iterative Construction Algorithm (MICA) designs a Multi-Layer Perceptron (MLP) neural network as it trains. MICA adds Hidden Layer Nodes one at a time, separating classes on a pair-wise basis, until the data is projected into a linearly separable space by class. Then MICA trains the Output Layer Nodes, which results in an MLP that achieves 100% accuracy on the training data. MICA, like Backprop, produces an MLP that is a minimum mean squared error approximation of the Bayes optimal discriminant function. Moreover, MICA's training technique yields a novel feature selection technique and a hidden node pruning technique.

  13. Improved calibration of mass stopping power in low density tissue for a proton pencil beam algorithm.

    PubMed

    Warren, Daniel R; Partridge, Mike; Hill, Mark A; Peach, Ken

    2015-06-01

    Dose distributions for proton therapy treatments are almost exclusively calculated using pencil beam algorithms. An essential input to these algorithms is the patient model, derived from x-ray computed tomography (CT), which is used to estimate proton stopping power along the pencil beam paths. This study highlights a potential inaccuracy in the mapping between mass density and proton stopping power used by a clinical pencil beam algorithm in materials less dense than water. It proposes an alternative physically-motivated function (the mass average, or MA, formula) for use in this region. Comparisons are made between dose-depth curves calculated by the pencil beam method and those calculated by the Monte Carlo particle transport code MCNPX in a one-dimensional lung model. Proton range differences of up to 3% are observed between the methods, reduced to  <1% when using the MA function. The impact of these range errors on clinical dose distributions is demonstrated using treatment plans for a non-small cell lung cancer patient. The change in stopping power calculation methodology results in relatively minor differences in dose when plans use three fields, but differences are observed at the 2%-2 mm level when a single field uniform dose technique is adopted. It is therefore suggested that the MA formula is adopted by users of the pencil beam algorithm for optimal dose calculation in lung, and that a similar approach is considered when beams traverse other low density regions such as the paranasal sinuses and mastoid process.

  14. Improved calibration of mass stopping power in low density tissue for a proton pencil beam algorithm

    NASA Astrophysics Data System (ADS)

    Warren, Daniel R.; Partridge, Mike; Hill, Mark A.; Peach, Ken

    2015-06-01

    Dose distributions for proton therapy treatments are almost exclusively calculated using pencil beam algorithms. An essential input to these algorithms is the patient model, derived from x-ray computed tomography (CT), which is used to estimate proton stopping power along the pencil beam paths. This study highlights a potential inaccuracy in the mapping between mass density and proton stopping power used by a clinical pencil beam algorithm in materials less dense than water. It proposes an alternative physically-motivated function (the mass average, or MA, formula) for use in this region. Comparisons are made between dose-depth curves calculated by the pencil beam method and those calculated by the Monte Carlo particle transport code MCNPX in a one-dimensional lung model. Proton range differences of up to 3% are observed between the methods, reduced to  <1% when using the MA function. The impact of these range errors on clinical dose distributions is demonstrated using treatment plans for a non-small cell lung cancer patient. The change in stopping power calculation methodology results in relatively minor differences in dose when plans use three fields, but differences are observed at the 2%-2 mm level when a single field uniform dose technique is adopted. It is therefore suggested that the MA formula is adopted by users of the pencil beam algorithm for optimal dose calculation in lung, and that a similar approach is considered when beams traverse other low density regions such as the paranasal sinuses and mastoid process.
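
    The abstract does not reproduce the MA formula itself, but the name suggests Bragg additivity: the mass stopping power of a mixture is the mass-fraction-weighted average of its components' mass stopping powers, and the linear stopping power then scales with density. A sketch under that reading follows; the component values are rough placeholders, not the paper's calibration.

    ```python
    def mass_average_msp(mass_fractions, component_msp):
        """Bragg additivity: mass stopping power of a mixture as the
        mass-fraction-weighted average of component mass stopping powers."""
        assert abs(sum(mass_fractions) - 1.0) < 1e-9
        return sum(w * s for w, s in zip(mass_fractions, component_msp))

    # Lung-like voxel treated as a tissue/air mixture by mass. The relative
    # mass stopping powers (vs. water) below are approximate placeholders.
    msp_mixture = mass_average_msp(mass_fractions=[0.995, 0.005],
                                   component_msp=[1.00, 0.90])

    # Linear stopping power relative to water scales with mass density.
    rho = 0.26  # g/cm^3, a typical lung density
    print("relative linear stopping power ~", round(rho * msp_mixture, 4))
    ```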

  15. Online Planning Algorithm

    NASA Technical Reports Server (NTRS)

    Rabideau, Gregg R.; Chien, Steve A.

    2010-01-01

    AVA v2 software selects goals for execution from a set of goals that oversubscribe shared resources. The term goal refers to a science or engineering request to execute a possibly complex command sequence, such as imaging targets or ground-station downlinks. Developed as an extension to the Virtual Machine Language (VML) execution system, the software enables onboard and remote goal triggering through the use of an embedded, dynamic goal set that can oversubscribe resources. From the set of conflicting goals, a subset must be chosen that maximizes a given quality metric, which in this case is strict priority selection. A goal can never be pre-empted by a lower priority goal, and high-level goals can be added, removed, or updated at any time, with the "best" goals selected for execution. The software addresses the issue of re-planning that must be performed in a short time frame by the embedded system, where computational resources are constrained. In particular, the algorithm addresses problems with well-defined goal requests without temporal flexibility that oversubscribe available resources. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion, allowing requests to be changed or added at the last minute, thereby enabling shorter response times and greater autonomy for the system under control.
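
    Strict priority selection under a shared resource budget has a compact greedy core, sketched below. This is an illustration of the selection rule described above, not the AVA v2 implementation; the goal names, priorities and costs are invented.

    ```python
    def select_goals(goals, capacity):
        """Strict-priority selection: walk goals from highest priority down,
        admitting each one that still fits the shared resource budget, so a
        lower-priority goal can never displace a higher-priority one.
        Each goal is (name, priority, resource_cost); re-run on any change."""
        selected, used = [], 0.0
        for name, priority, cost in sorted(goals, key=lambda g: -g[1]):
            if used + cost <= capacity:
                selected.append(name)
                used += cost
        return selected

    goals = [("image_target_A", 10, 40.0), ("downlink_B", 8, 50.0),
             ("image_target_C", 8, 30.0), ("calibration_D", 3, 25.0)]
    print(select_goals(goals, capacity=100.0))  # ['image_target_A', 'downlink_B']
    ```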

  16. Contour Error Map Algorithm

    NASA Technical Reports Server (NTRS)

    Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John

    2005-01-01

    The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively; and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: One identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function for the purpose of attempting to eliminate river-breeze contributions in the wind fields.

  17. Comparing the accuracy of four-dimensional photon dose calculations with three-dimensional calculations using moving and deforming phantoms

    SciTech Connect

    Vinogradskiy, Yevgeniy Y.; Balter, Peter; Followill, David S.; Alvarez, Paola E.; White, R. Allen; Starkschall, George

    2009-11-15

    Purpose: Four-dimensional (4D) dose calculation algorithms, which explicitly incorporate respiratory motion in the calculation of doses, have the potential to improve the accuracy of dose calculations in thoracic treatment planning; however, they generally require greater computing power and resources than currently used for three-dimensional (3D) dose calculations. The purpose of this work was to quantify the increase in accuracy of 4D dose calculations versus 3D dose calculations. Methods: The accuracy of each dose calculation algorithm was assessed using measurements made with two phantoms. Specifically, the authors used a rigid moving anthropomorphic thoracic phantom and an anthropomorphic thoracic phantom with a deformable lung insert. To incorporate a clinically relevant range of scenarios, they programmed the phantoms to move and deform with two motion patterns: a sinusoidal motion pattern and an irregular motion pattern that was extracted from an actual patient's breathing profile. For each combination of phantom and motion pattern, three plans were created: a single-beam plan, a multiple-beam plan, and an intensity-modulated radiation therapy plan. Doses were calculated using 4D dose calculation methods as well as conventional 3D dose calculation methods. The rigid moving and deforming phantoms were irradiated according to the three treatment plans and doses were measured using thermoluminescent dosimeters (TLDs) and radiochromic film. The accuracy of each dose calculation algorithm was assessed using measured-to-calculated TLD doses and a γ analysis. Results: No significant differences were observed between the measured-to-calculated TLD ratios among 4D and 3D dose calculations. The γ results revealed that 4D dose calculations had a significantly greater percentage of pixels passing the 5%/3 mm criteria than 3D dose calculations. Conclusions: These results indicate no significant differences in the accuracy between the 4D and the 3D dose calculations.

  18. Dose impact in radiographic lung injury following lung SBRT: Statistical analysis and geometric interpretation

    SciTech Connect

    Yu, Victoria; Kishan, Amar U.; Cao, Minsong; Low, Daniel; Lee, Percy; Ruan, Dan

    2014-03-15

    Purpose: To demonstrate a new method of evaluating the dose response of treatment-induced lung radiographic injury post-SBRT (stereotactic body radiotherapy) treatment and the discovery of bimodal dose behavior within clinically identified injury volumes. Methods: Follow-up CT scans at 3, 6, and 12 months were acquired from 24 patients treated with SBRT for stage-1 primary lung cancers or oligometastatic lesions. Injury regions in these scans were propagated to the planning CT coordinates by performing deformable registration of the follow-ups to the planning CTs. A bimodal behavior was repeatedly observed in the probability distribution of dose values within the deformed injury regions. Based on a mixture-Gaussian assumption, an Expectation-Maximization (EM) algorithm was used to obtain characteristic parameters for such a distribution. Geometric analysis was performed to interpret such parameters and infer the critical dose level that is potentially inductive of post-SBRT lung injury. Results: The Gaussian mixture obtained from the EM algorithm closely approximates the empirical dose histogram within the injury volume with good consistency. The average Kullback-Leibler divergence values between the empirical differential dose volume histogram and the EM-obtained Gaussian mixture distribution were calculated to be 0.069, 0.063, and 0.092 for the 3, 6, and 12 month follow-up groups, respectively. The lower Gaussian component was located at approximately 70% of the prescription dose (35 Gy) for all three follow-up time points. The higher Gaussian component, contributed by the dose received by the planning target volume, was located at around 107% of the prescription dose. Geometrical analysis suggests the mean of the lower Gaussian component, located at 35 Gy, as a possible indicator for a critical dose that induces lung injury after SBRT. Conclusions: An innovative and improved method for analyzing the correspondence between lung radiographic injury and SBRT treatment dose has been developed.
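
    The EM update for a two-component one-dimensional Gaussian mixture is short enough to write out in full. The sketch below runs it on synthetic dose samples whose modes mimic the reported ~70% and ~107%-of-prescription peaks; the sample counts and widths are invented.

    ```python
    import numpy as np

    def em_two_gaussians(x, iters=100):
        """EM for a two-component 1-D Gaussian mixture, as used to separate
        the low- and high-dose modes inside the injury volume."""
        mu = np.array([x.min(), x.max()], dtype=float)  # crude initialization
        sigma = np.array([x.std(), x.std()])
        pi = np.array([0.5, 0.5])
        for _ in range(iters):
            # E-step: responsibility of each component for each dose sample.
            pdf = (pi / (sigma * np.sqrt(2 * np.pi))
                   * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2))
            resp = pdf / pdf.sum(axis=1, keepdims=True)
            # M-step: update weights, means, and standard deviations.
            nk = resp.sum(axis=0)
            pi = nk / len(x)
            mu = (resp * x[:, None]).sum(axis=0) / nk
            sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
        return pi, mu, sigma

    # Synthetic dose samples: modes near 35 Gy and 53.5 Gy (70% and ~107%
    # of a 50 Gy prescription), mimicking the bimodal injury histogram.
    rng = np.random.default_rng(0)
    x = np.concatenate([rng.normal(35, 4, 3000), rng.normal(53.5, 2, 2000)])
    print(em_two_gaussians(x))
    ```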

  19. Poster — Thur Eve — 27: Flattening Filter Free VMAT Quality Assurance: Dose Rate Considerations for Detector Response

    SciTech Connect

    Viel, Francis; Duzenli, Cheryl; Camborde, Marie-Laure; Strgar, Vincent; Horwood, Ron; Atwal, Parmveer; Gete, Ermias; Karan, Tania

    2014-08-15

    Introduction: Radiation detector responses can be affected by dose rate. Due to the higher dose per pulse and wider range of MU rates in FFF beams, detector responses should be characterized prior to implementation of QA protocols for FFF beams. During VMAT delivery, the MU rate may also vary dramatically within a treatment fraction. This study looks at the dose per pulse variation throughout a 3D volume for typical VMAT plans and the response characteristics for a variety of detectors, and makes recommendations on the design of QA protocols for FFF VMAT QA. Materials and Methods: Linac log file data and a simplified dose calculation algorithm are used to calculate dose per pulse for a variety of clinical VMAT plans, on a voxel by voxel basis, as a function of time in a cylindrical phantom. Diode and ion chamber array responses are characterized over the relevant range of dose per pulse and dose rate. Results: Dose per pulse ranges from <0.1 mGy/pulse to 1.5 mGy/pulse in a typical VMAT treatment delivery using the 10XFFF beam. Diode detector arrays demonstrate increased sensitivity to dose (±3%) with increasing dose per pulse over this range. Ion chamber arrays demonstrate decreased sensitivity to dose (±1%) with increasing dose rate over this range. Conclusions: QA protocols should be designed taking into consideration inherent changes in detector sensitivity with dose rate. Neglecting to account for changes in detector response with dose per pulse can lead to skewed QA results.

  20. Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    Finch, S.M.; McMakin, A.H.

    1991-01-01

    The objective of the Hanford Environmental Dose Reconstruction Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The project is being managed and conducted by the Pacific Northwest Laboratory (PNL) under the direction of an independent Technical Steering Panel (TSP). The TSP consists of experts in environmental pathways, epidemiology, surface-water transport, ground-water transport, statistics, demography, agriculture, meteorology, nuclear engineering, radiation dosimetry, and cultural anthropology. Included are appointed technical members representing the states of Oregon and Washington, a representative of Native American tribes, and an individual representing the public. The project is divided into the following technical tasks, which correspond to the path radionuclides followed, from release to impact on humans (dose estimates): Source Terms; Environmental Transport; Environmental Monitoring Data; Demographics, Agriculture, and Food Habits; and Environmental Pathways and Dose Estimates.

  1. Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    McMakin, A.H.; Cannon, S.D.; Finch, S.M.

    1992-07-01

    The objective of the Hanford Environmental Dose Reconstruction (HEDR) Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The TSP consists of experts in environmental pathways, epidemiology, surface-water transport, ground-water transport, statistics, demography, agriculture, meteorology, nuclear engineering, radiation dosimetry, and cultural anthropology. Included are appointed technical members representing the states of Oregon, Washington, and Idaho, a representative of Native American tribes, and an individual representing the public. The project is divided into the following technical tasks, which correspond to the path radionuclides followed from release to impact on humans (dose estimates): source terms; environmental transport; environmental monitoring data; demography, food consumption, and agriculture; and environmental pathways and dose estimates. Progress is discussed.

  2. Gamma Radiation Doses In Sweden

    SciTech Connect

    Almgren, Sara; Isaksson, Mats; Barregaard, Lars

    2008-08-07

    Gamma dose rate measurements were performed in one urban and one rural area using thermoluminescence dosimeters (TLD) worn by 46 participants and placed in their dwellings. The personal effective dose rates were 0.096 ± 0.019 (1 SD) and 0.092 ± 0.016 (1 SD) μSv/h in the urban and rural area, respectively. The corresponding dose rates in the dwellings were 0.11 ± 0.042 (1 SD) and 0.091 ± 0.026 (1 SD) μSv/h. However, the differences between the areas were not significant. The values were higher in buildings made of concrete than of wood and higher in apartments than in detached houses. Also, ²²²Rn measurements were performed in each dwelling, which showed no correlation with the gamma dose rates in the dwellings.

  3. Estimate Radiological Dose for Animals

    1997-12-18

    Estimates radiological dose for animals in the ecological environment using open-literature values for parameters such as body weight, plant and soil ingestion rates, radioactive half-life, absorbed energy, biological half-life, gamma energy per decay, and soil-to-plant transfer factor, among others.

  4. Selective source blocking for Gamma Knife radiosurgery of trigeminal neuralgia based on analytical dose modelling

    NASA Astrophysics Data System (ADS)

    Li, Kaile; Ma, Lijun

    2004-08-01

    We have developed an automatic critical region shielding (ACRS) algorithm for Gamma Knife radiosurgery of trigeminal neuralgia. The algorithm selectively blocks 201 Gamma Knife sources to minimize the dose to the brainstem while irradiating the root entry area of the trigeminal nerve with 70-90 Gy. An independent dose model was developed to implement the algorithm. The accuracy of the dose model was tested and validated via comparison with the Leksell GammaPlan (LGP) calculations. Agreements of 3% or 3 mm in isodose distributions were found for both single-shot and multiple-shot treatment plans. After the optimized blocking patterns are obtained via the independent dose model, they are imported into the LGP for final dose calculations and treatment planning analyses. We found that the use of a moderate number of source plugs (30-50 plugs) significantly lowered (~40%) the dose to the brainstem for trigeminal neuralgia treatments. Considering the small effort involved in using these plugs, we recommend source blocking for all trigeminal neuralgia treatments with Gamma Knife radiosurgery.
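
    The ACRS optimization itself is not detailed in this abstract; the sketch below is a minimal greedy stand-in, on made-up per-source dose contributions to the nerve target and brainstem, that blocks the sources with the worst brainstem-to-target dose ratio.

```python
import numpy as np

rng = np.random.default_rng(3)
n_sources = 201
dose_target = rng.uniform(0.4, 1.0, n_sources)     # per-source dose to the nerve target (made up)
dose_brainstem = rng.uniform(0.0, 0.3, n_sources)  # per-source dose to the brainstem (made up)

def choose_plugs(n_plugs=40):
    """Greedy stand-in for the ACRS optimization: block the sources with the
    highest brainstem-to-target dose ratio."""
    ratio = dose_brainstem / dose_target
    return np.argsort(ratio)[-n_plugs:]  # indices of the sources to plug

plugged = choose_plugs()
frac_bs = dose_brainstem[plugged].sum() / dose_brainstem.sum()
print(f"plugging {len(plugged)} sources removes {100 * frac_bs:.0f}% of brainstem dose")
```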

  5. Monte Carlo dose calculations for phantoms with hip prostheses

    NASA Astrophysics Data System (ADS)

    Bazalova, M.; Coolens, C.; Cury, F.; Childs, P.; Beaulieu, L.; Verhaegen, F.

    2008-02-01

    Computed tomography (CT) images of patients with hip prostheses are severely degraded by metal streaking artefacts. The low image quality makes organ contouring more difficult and can result in large dose calculation errors when Monte Carlo (MC) techniques are used. In this work, the extent of streaking artefacts produced by three common hip prosthesis materials (Ti-alloy, stainless steel, and Co-Cr-Mo alloy) was studied. The prostheses were tested in a hypothetical prostate treatment with five 18 MV photon beams. The dose distributions for unilateral and bilateral prosthesis phantoms were calculated with the EGSnrc/DOSXYZnrc MC code. This was done in three phantom geometries: in the exact geometry, in the original CT geometry, and in an artefact-corrected geometry. The artefact-corrected geometry was created using a modified filtered back-projection correction technique. It was found that unilateral prosthesis phantoms do not show large dose calculation errors, as long as the beams miss the artefact-affected volume. This is possible to achieve in the case of unilateral prosthesis phantoms (except for the Co-Cr-Mo prosthesis which gives a 3% error) but not in the case of bilateral prosthesis phantoms. The largest dose discrepancies were obtained for the bilateral Co-Cr-Mo hip prosthesis phantom, up to 11% in some voxels within the prostate. The artefact correction algorithm worked well for all phantoms and resulted in dose calculation errors below 2%. In conclusion, a MC treatment plan should include an artefact correction algorithm when treating patients with hip prostheses.

  6. Technical basis for dose reconstruction

    SciTech Connect

    Anspaugh, L.R.

    1996-01-31

    The purpose of this paper is to consider two general topics: technical considerations of why dose-reconstruction studies should or should not be performed and methods of dose reconstruction. The first topic is of general and growing interest as the number of dose-reconstruction studies increases, and one asks the question whether it is necessary to perform a dose reconstruction for virtually every site at which, for example, the Department of Energy (DOE) has operated a nuclear-related facility. And there is the broader question of how one might logically draw the line at performing or not performing dose-reconstruction (radiological and chemical) studies for virtually every industrial complex in the entire country. The second question is also of general interest. There is no single correct way to perform a dose-reconstruction study, and it is important not to follow blindly a single method to the point that cheaper, faster, more accurate, and more transparent methods might not be developed and applied.

  7. Ultraviolet radiation cataract: dose dependence

    NASA Astrophysics Data System (ADS)

    Soderberg, Per G.; Loefgren, Stefan

    1994-07-01

    Current safety limits for cataract development after acute exposure to ultraviolet radiation (UVR) are based on experiments analyzing experimental data with a quantal, effect-no effect, dose-response model. The present study showed that the intensity of forward light scattering is better described with a continuous dose-response model. It was found that 3, 30, and 300 kJ/m² of 300 nm UVR induce increased light scattering within 6 h. For all three doses the intensity of forward light scattering was constant after 6 h. The intensity of forward light scattering was proportional to the log dose of 300 nm UVR. There was a slight increase of the intensity of forward light scattering on the contralateral side in animals that received 300 kJ/m². Altogether 72 Sprague-Dawley male rats were included. Half of the rats were exposed in vivo on one side to 300 nm UVR. The other half was kept as a control group, receiving the same treatment as exposed rats but without delivery of 300 nm UVR to the eye. Subgroups of the rats received one of the three doses. Rats were sacrificed at varying intervals after the exposure. The lenses were extracted and the forward light scattering was estimated. It is concluded that the intensity of forward light scattering in the lens after exposure to 300 nm UVR should be described with a continuous dose-response model.

  8. Weldon Spring historical dose estimate

    SciTech Connect

    Meshkov, N.; Benioff, P.; Wang, J.; Yuan, Y.

    1986-07-01

    This study was conducted to determine the estimated radiation doses that individuals in five nearby population groups and the general population in the surrounding area may have received as a consequence of activities at a uranium processing plant in Weldon Spring, Missouri. The study is retrospective and encompasses plant operations (1957-1966), cleanup (1967-1969), and maintenance (1969-1982). The dose estimates for members of the nearby population groups are as follows. Of the three periods considered, the largest doses to the general population in the surrounding area would have occurred during the plant operations period (1957-1966). Dose estimates for the cleanup (1967-1969) and maintenance (1969-1982) periods are negligible in comparison. Based on the monitoring data, if there was a person residing continually in a dwelling 1.2 km (0.75 mi) north of the plant, this person is estimated to have received an average of about 96 mrem/yr (ranging from 50 to 160 mrem/yr) above background during plant operations, whereas the dose to a nearby resident during later years is estimated to have been about 0.4 mrem/yr during cleanup and about 0.2 mrem/yr during the maintenance period. These values may be compared with the background dose in Missouri of 120 mrem/yr.

  9. STAR Algorithm Integration Team - Facilitating operational algorithm development

    NASA Astrophysics Data System (ADS)

    Mikles, V. J.

    2015-12-01

    The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support of the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.

  10. Algorithm aversion: people erroneously avoid algorithms after seeing them err.

    PubMed

    Dietvorst, Berkeley J; Simmons, Joseph P; Massey, Cade

    2015-02-01

    Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. In 5 studies, participants either saw an algorithm make forecasts, a human make forecasts, both, or neither. They then decided whether to tie their incentives to the future predictions of the algorithm or the human. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.

  11. Assessing the effect of electron density in photon dose calculations

    SciTech Connect

    Seco, J.; Evans, P. M.

    2006-02-15

    Photon dose calculation algorithms (such as the pencil beam and collapsed cone, CC) model the attenuation of a primary photon beam in media other than water by using pathlength scaling based on the relative mass density of the media to water. In this study, we assess whether differences in the electron density between water and media with different atomic composition can influence the accuracy of conventional photon dose calculation algorithms. A comparison is performed between an electron-density scaling method and the standard mass-density scaling method for (i) tissues present in the human body (such as bone, muscle, etc.) and for (ii) water-equivalent plastics, used in radiotherapy dosimetry and quality assurance. We demonstrate that the important material property that should be taken into account by photon dose algorithms is the electron density, and not the mass density. The mass-density scaling method is shown to overestimate, relative to electron-density predictions, the primary photon fluence for tissues in the human body and water-equivalent plastics, where 6%-7% and 10% differences were observed respectively for bone and air. However, in the case of patients, differences are expected to be smaller due to the complexity of a treatment plan, of the patient anatomy and atomic composition, and of the smaller thickness of bone/air that the incident photon beams of a treatment plan may have to traverse. Differences have also been observed for conventional dose algorithms, such as CC, where an overestimate of the lung dose occurs when irradiating lung tumors. The incorrect lung dose can be attributed to the incorrect modeling of the photon beam attenuation through the rib cage (a thickness of 2-3 cm of bone upstream of the lung tumor) and through the lung, and to the oversimplified modeling of electron transport in convolution algorithms. In the present study, the overestimation of the primary photon fluence, using the mass-density scaling method, was shown
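
    A worked toy comparison of the two scaling methods, using illustrative (not reference) relative densities and an assumed water attenuation coefficient: the radiological depth is the physical thickness scaled by relative mass density in one case and by relative electron density in the other, and the primary transmission follows by exponential attenuation.

```python
import numpy as np

# Illustrative slab stack: (thickness_cm, mass density rel. to water,
# electron density rel. to water). Values are rough placeholders.
slabs = [(3.0, 1.00, 1.00),   # soft tissue
         (2.0, 1.85, 1.65),   # bone-like: electron density scales less than mass
         (5.0, 0.26, 0.26)]   # lung-like
mu_w = 0.0495                 # 1/cm, assumed water attenuation coefficient

d_mass = sum(t * rm for t, rm, _ in slabs)  # radiological depth, mass scaling
d_elec = sum(t * re for t, _, re in slabs)  # radiological depth, electron scaling

T_mass, T_elec = np.exp(-mu_w * d_mass), np.exp(-mu_w * d_elec)
print(f"primary transmission differs by {100 * (T_elec / T_mass - 1):.1f}%")
```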

  12. Extrapolation of the DNA fragment-size distribution after high-dose irradiation to predict effects at low doses

    NASA Technical Reports Server (NTRS)

    Ponomarev, A. L.; Cucinotta, F. A.; Sachs, R. K.; Brenner, D. J.; Peterson, L. E.

    2001-01-01

    The patterns of DSBs induced in the genome are different for sparsely and densely ionizing radiations: In the former case, the patterns are well described by a random-breakage model; in the latter, a more sophisticated tool is needed. We used a Monte Carlo algorithm with a random-walk geometry of chromatin, and a track structure defined by the radial distribution of energy deposition from an incident ion, to fit the PFGE data for fragment-size distribution after high-dose irradiation. These fits determined the unknown parameters of the model, enabling the extrapolation of data for high-dose irradiation to the low doses that are relevant for NASA space radiation research. The randomly-located-clusters formalism was used to speed the simulations. It was shown that only one adjustable parameter, Q, the track efficiency parameter, was necessary to predict DNA fragment sizes for wide ranges of doses. This parameter was determined for a variety of radiations and LETs and was used to predict the DSB patterns at the HPRT locus of the human X chromosome after low-dose irradiation. It was found that high-LET radiation would be more likely than low-LET radiation to induce additional DSBs within the HPRT gene if this gene already contained one DSB.
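
    For the sparsely ionizing limit, the random-breakage model mentioned above is simple to simulate; the sketch below, with made-up parameter values, draws a Poisson number of DSBs placed uniformly along the genome and pools the resulting fragment sizes.

```python
import numpy as np

rng = np.random.default_rng(0)

def fragment_sizes(genome_mbp=100.0, dsb_per_gy=5.0, dose_gy=10.0, runs=1000):
    """Random-breakage model: a Poisson number of DSBs is placed uniformly at
    random along the genome; fragment sizes are pooled over runs. Parameter
    values here are made up for illustration."""
    out = []
    for _ in range(runs):
        n = rng.poisson(dsb_per_gy * dose_gy)
        cuts = np.sort(rng.uniform(0.0, genome_mbp, n))
        out.append(np.diff(np.concatenate(([0.0], cuts, [genome_mbp]))))
    return np.concatenate(out)

sizes = fragment_sizes()  # compare histogram with PFGE-derived distributions
```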

  13. Validation of a track repeating algorithm for intensity modulated proton therapy: clinical cases study.

    PubMed

    Yepes, Pablo P; Eley, John G; Liu, Amy; Mirkovic, Dragan; Randeniya, Sharmalee; Titt, Uwe; Mohan, Radhe

    2016-04-01

    Monte Carlo (MC) methods are acknowledged as the most accurate technique to calculate dose distributions. However, due to their lengthy calculation times, they are difficult to utilize in the clinic or for large retrospective studies. Track-repeating algorithms, based on MC-generated particle track data in water, accelerate dose calculations substantially while essentially preserving the accuracy of MC. In this study, we present the validation of an efficient dose calculation algorithm for intensity modulated proton therapy, the fast dose calculator (FDC), based on a track-repeating technique. We validated the FDC algorithm for 23 patients, which included 7 brain, 6 head-and-neck, 5 lung, 1 spine, 1 pelvis and 3 prostate cases. For validation, we compared FDC-generated dose distributions with those from a full-fledged Monte Carlo based on GEANT4 (G4). We compared dose-volume histograms and 3D gamma indices, and analyzed a series of dosimetric indices. More than 99% of the voxels in the voxelized phantoms describing the patients have a gamma index smaller than unity for the 2%/2 mm criterion. In addition, the difference relative to the prescribed dose between the dosimetric indices calculated with FDC and G4 is less than 1%. FDC reduces the calculation times from 5 ms per proton to around 5 μs. PMID:26961764

  14. Validation of a track repeating algorithm for intensity modulated proton therapy: clinical cases study.

    PubMed

    Yepes, Pablo P; Eley, John G; Liu, Amy; Mirkovic, Dragan; Randeniya, Sharmalee; Titt, Uwe; Mohan, Radhe

    2016-04-01

    Monte Carlo (MC) methods are acknowledged as the most accurate technique to calculate dose distributions. However, due to their lengthy calculation times, they are difficult to utilize in the clinic or for large retrospective studies. Track-repeating algorithms, based on MC-generated particle track data in water, accelerate dose calculations substantially while essentially preserving the accuracy of MC. In this study, we present the validation of an efficient dose calculation algorithm for intensity modulated proton therapy, the fast dose calculator (FDC), based on a track-repeating technique. We validated the FDC algorithm for 23 patients, which included 7 brain, 6 head-and-neck, 5 lung, 1 spine, 1 pelvis and 3 prostate cases. For validation, we compared FDC-generated dose distributions with those from a full-fledged Monte Carlo based on GEANT4 (G4). We compared dose-volume histograms and 3D gamma indices, and analyzed a series of dosimetric indices. More than 99% of the voxels in the voxelized phantoms describing the patients have a gamma index smaller than unity for the 2%/2 mm criterion. In addition, the difference relative to the prescribed dose between the dosimetric indices calculated with FDC and G4 is less than 1%. FDC reduces the calculation times from 5 ms per proton to around 5 μs.

  15. Validation of a track repeating algorithm for intensity modulated proton therapy: clinical cases study

    NASA Astrophysics Data System (ADS)

    Yepes, Pablo P.; Eley, John G.; Liu, Amy; Mirkovic, Dragan; Randeniya, Sharmalee; Titt, Uwe; Mohan, Radhe

    2016-04-01

    Monte Carlo (MC) methods are acknowledged as the most accurate technique to calculate dose distributions. However, due to their lengthy calculation times, they are difficult to utilize in the clinic or for large retrospective studies. Track-repeating algorithms, based on MC-generated particle track data in water, accelerate dose calculations substantially while essentially preserving the accuracy of MC. In this study, we present the validation of an efficient dose calculation algorithm for intensity modulated proton therapy, the fast dose calculator (FDC), based on a track-repeating technique. We validated the FDC algorithm for 23 patients, which included 7 brain, 6 head-and-neck, 5 lung, 1 spine, 1 pelvis and 3 prostate cases. For validation, we compared FDC-generated dose distributions with those from a full-fledged Monte Carlo based on GEANT4 (G4). We compared dose-volume histograms and 3D gamma indices, and analyzed a series of dosimetric indices. More than 99% of the voxels in the voxelized phantoms describing the patients have a gamma index smaller than unity for the 2%/2 mm criterion. In addition, the difference relative to the prescribed dose between the dosimetric indices calculated with FDC and G4 is less than 1%. FDC reduces the calculation times from 5 ms per proton to around 5 μs.
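
    The 2%/2 mm gamma comparison used in these validations can be sketched directly. The following brute-force global gamma index is a generic textbook formulation, not the authors' implementation; it searches voxel-level offsets within the distance-to-agreement radius, which is sufficient for a pass/fail decision at gamma = 1, and np.roll wraps at array edges, so boundaries need care in real use.

```python
import itertools
import numpy as np

def gamma_3d(ref, ev, spacing_mm=(2.0, 2.0, 2.0), dd=0.02, dta_mm=2.0):
    """Brute-force global gamma index for two 3D dose grids on the same lattice.
    dd is the dose criterion as a fraction of the reference maximum; dta_mm is
    the distance-to-agreement criterion."""
    dmax = ref.max()
    reach = [int(np.ceil(dta_mm / s)) for s in spacing_mm]
    best = np.full(ref.shape, np.inf)
    for off in itertools.product(*[range(-r, r + 1) for r in reach]):
        dist2 = sum((o * s) ** 2 for o, s in zip(off, spacing_mm)) / dta_mm ** 2
        if dist2 > 1.0:
            continue  # search limited to the DTA sphere (enough for gamma <= 1)
        dose2 = ((np.roll(ev, off, axis=(0, 1, 2)) - ref) / (dd * dmax)) ** 2
        best = np.minimum(best, dist2 + dose2)
    return np.sqrt(best)  # pass rate: (gamma_3d(ref, ev) <= 1).mean()
```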

  16. Multisensor data fusion algorithm development

    SciTech Connect

    Yocky, D.A.; Chadwick, M.D.; Goudy, S.P.; Johnson, D.K.

    1995-12-01

    This report presents a two-year LDRD research effort into multisensor data fusion. We approached the problem by addressing the available types of data, preprocessing that data, and developing fusion algorithms using that data. The report reflects these three distinct areas. First, the possible data sets for fusion are identified. Second, automated registration techniques for imagery data are analyzed. Third, two fusion techniques are presented. The first fusion algorithm is based on the two-dimensional discrete wavelet transform. Using test images, the wavelet algorithm is compared against intensity modulation and intensity-hue-saturation image fusion algorithms that are available in commercial software. The wavelet approach outperforms the other two fusion techniques by preserving spectral/spatial information more precisely. The wavelet fusion algorithm was also applied to Landsat Thematic Mapper and SPOT panchromatic imagery data. The second algorithm is based on a linear-regression technique. We analyzed the technique using the same Landsat and SPOT data.
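
    A minimal wavelet fusion sketch in the spirit of the report's first algorithm, assuming the PyWavelets package and co-registered, same-size images: the fused image keeps the multispectral approximation coefficients and injects the panchromatic detail coefficients. The exact coefficient-combination rule in the report may differ.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_fuse(pan, ms, wavelet="db2", level=2):
    """Fuse a high-resolution panchromatic image with an already upsampled,
    co-registered multispectral band: keep the MS approximation (low-pass)
    coefficients and inject the PAN detail (high-pass) coefficients."""
    cp = pywt.wavedec2(np.asarray(pan, float), wavelet, level=level)
    cm = pywt.wavedec2(np.asarray(ms, float), wavelet, level=level)
    fused = [cm[0]] + cp[1:]  # MS approximation + PAN details at every level
    return pywt.waverec2(fused, wavelet)
```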

  17. SU-E-T-344: Validation and Clinical Experience of Eclipse Electron Monte Carlo Algorithm (EMC)

    SciTech Connect

    Pokharel, S; Rana, S

    2014-06-01

    Purpose: The purpose of this study is to validate Eclipse Electron Monte Carlo (Algorithm for routine clinical uses. Methods: The PTW inhomogeneity phantom (T40037) with different combination of heterogeneous slabs has been CT-scanned with Philips Brilliance 16 slice scanner. The phantom contains blocks of Rando Alderson materials mimicking lung, Polystyrene (Tissue), PTFE (Bone) and PMAA. The phantom has 30×30×2.5 cm base plate with 2cm recesses to insert inhomogeneity. The detector systems used in this study are diode, tlds and Gafchromic EBT2 films. The diode and tlds were included in CT scans. The CT sets are transferred to Eclipse treatment planning system. Several plans have been created with Eclipse Monte Carlo (EMC) algorithm 11.0.21. Measurements have been carried out in Varian TrueBeam machine for energy from 6–22mev. Results: The measured and calculated doses agreed very well for tissue like media. The agreement was reasonably okay for the presence of lung inhomogeneity. The point dose agreement was within 3.5% and Gamma passing rate at 3%/3mm was greater than 93% except for 6Mev(85%). The disagreement can reach as high as 10% in the presence of bone inhomogeneity. This is due to eclipse reporting dose to the medium as opposed to the dose to the water as in conventional calculation engines. Conclusion: Care must be taken when using Varian Eclipse EMC algorithm for dose calculation for routine clinical uses. The algorithm dose not report dose to water in which most of the clinical experiences are based on rather it just reports dose to medium directly. In the presence of inhomogeneity such as bone, the dose discrepancy can be as high as 10% or even more depending on the location of normalization point or volume. As Radiation oncology as an empirical science, care must be taken before using EMC reported monitor units for clinical uses.

  18. Efficient Kriging Algorithms

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess

    2011-01-01

    More efficient versions of an interpolation method, called kriging, have been introduced in order to reduce its traditionally high computational cost. Written in C++, these approaches were tested on both synthetic and real data. Kriging is a best unbiased linear estimator and suitable for interpolation of scattered data points. Kriging has long been used in the geostatistic and mining communities, but is now being researched for use in the image fusion of remotely sensed data. This allows a combination of data from various locations to be used to fill in any missing data from any single location. To arrive at the faster algorithms, sparse SYMMLQ iterative solver, covariance tapering, Fast Multipole Methods (FMM), and nearest neighbor searching techniques were used. These implementations were used when the coefficient matrix in the linear system is symmetric, but not necessarily positive-definite.
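
    For reference, plain ordinary kriging at a single point can be written in a few lines; the sketch below uses a dense solve and an assumed exponential variogram, whereas the implementations described above replace exactly this step with sparse SYMMLQ, covariance tapering, FMM, and nearest-neighbor searching.

```python
import numpy as np

def ordinary_kriging(xy, z, xy_new, variogram=lambda h: 1.0 - np.exp(-h / 10.0)):
    """Ordinary kriging of one new point. xy: (n, 2) sample locations, z: (n,)
    values, xy_new: (2,) target location. The exponential variogram here is an
    assumption; real use requires fitting a variogram to the data."""
    n = len(xy)
    h = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(h)   # semivariances between samples
    A[-1, -1] = 0.0            # Lagrange-multiplier row/column
    b = np.append(variogram(np.linalg.norm(xy - xy_new, axis=-1)), 1.0)
    w = np.linalg.solve(A, b)  # dense solve; the report swaps in sparse solvers
    return w[:n] @ z           # best linear unbiased estimate at xy_new
```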

  19. Audio detection algorithms

    NASA Astrophysics Data System (ADS)

    Neta, B.; Mansager, B.

    1992-08-01

    Audio information concerning targets generally includes direction, frequencies, and energy levels. One use of audio cueing is to use direction information to help determine where more sensitive visual detection and acquisition sensors should be directed. Generally, use of audio cueing will shorten the time required for visual detection, although there could be circumstances where the audio information is misleading and degrades visual performance. Audio signatures can also be useful for helping classify the emanating platform, as well as to provide estimates of its velocity. The Janus combat simulation is the premier high-resolution model used by the Army and other agencies to conduct research. This model includes a visual detection component which essentially incorporates algorithms as described by Hartman (1985). The model in its current form does not have any sound cueing capability. This report is part of a research effort to investigate the utility of developing such a capability.

  20. Fighting Censorship with Algorithms

    NASA Astrophysics Data System (ADS)

    Mahdian, Mohammad

    In countries such as China or Iran where Internet censorship is prevalent, users usually rely on proxies or anonymizers to freely access the web. The obvious difficulty with this approach is that once the address of a proxy or an anonymizer is announced for use to the public, the authorities can easily filter all traffic to that address. This poses a challenge as to how proxy addresses can be announced to users without leaking too much information to the censorship authorities. In this paper, we formulate this question as an interesting algorithmic problem. We study this problem in a static and a dynamic model, and give almost tight bounds on the number of proxy servers required to give access to n people k of whom are adversaries. We will also discuss how trust networks can be used in this context.

  1. MO-G-18A-01: Radiation Dose Reducing Strategies in CT, Fluoroscopy and Radiography

    SciTech Connect

    Mahesh, M; Gingold, E; Jones, A

    2014-06-15

    Advances in medical x-ray imaging have provided significant benefits to patient care. According to NCRP 160, there are more than 400 million x-ray procedures performed annually in the United States alone, contributing nearly half of all the radiation exposure to the US population. Similar growth trends in medical x-ray imaging are observed worldwide. The apparent increase in the number of medical x-ray imaging procedures and new protocols, and the associated radiation dose and risk, have drawn considerable attention. This has led to a number of technological innovations such as tube current modulation, iterative reconstruction algorithms, dose alerts, dose displays, flat panel digital detectors, high-efficiency digital detectors, storage phosphor radiography, variable filters, etc., that are enabling users to acquire medical x-ray images at a much lower radiation dose. Along with these, there are a number of radiation dose optimization strategies that users can adopt to effectively lower radiation dose in medical x-ray procedures. The main objectives of this SAM course are to provide information on the various radiation dose optimization strategies in CT, fluoroscopy, and radiography and on how to implement them. Learning Objectives: To update the impact of technological advances on dose optimization in medical imaging. To identify radiation optimization strategies in computed tomography. To describe strategies for configuring fluoroscopic equipment that yields optimal images at reasonable radiation dose. To assess ways to configure digital radiography systems and recommend ways to improve image quality at optimal dose.

  2. Effect of Breathing Motion on Radiotherapy Dose Accumulation in the Abdomen Using Deformable Registration

    SciTech Connect

    Velec, Michael; Moseley, Joanne L.; Eccles, Cynthia L.; Craig, Tim; Sharpe, Michael B.; Dawson, Laura A.; Brock, Kristy K.

    2011-05-01

    Purpose: To investigate the effect of breathing motion and dose accumulation on the planned radiotherapy dose to liver tumors and normal tissues using deformable image registration. Methods and Materials: Twenty-one free-breathing stereotactic liver cancer radiotherapy patients, planned on static exhale computed tomography (CT) for 27-60 Gy in six fractions, were included. A biomechanical model-based deformable image registration algorithm retrospectively deformed each exhale CT to inhale CT. This deformation map was combined with exhale and inhale dose grids from the treatment planning system to accumulate dose over the breathing cycle. Accumulation was also investigated using a simple rigid liver-to-liver registration. Changes to tumor and normal tissue dose were quantified. Results: Relative to static plans, mean dose change (range) after deformable dose accumulation (as % of prescription dose) was -1 (-14 to 8) to minimum tumor, -4 (-15 to 0) to maximum bowel, -4 (-25 to 1) to maximum duodenum, 2 (-1 to 9) to maximum esophagus, -2 (-13 to 4) to maximum stomach, 0 (-3 to 4) to mean liver, and -1 (-5 to 1) and -2 (-7 to 1) to mean left and right kidneys. Compared to deformable registration, rigid modeling had changes up to 8% to minimum tumor and 7% to maximum normal tissues. Conclusion: Deformable registration and dose accumulation revealed potentially significant dose changes to either a tumor or normal tissue in the majority of cases as a result of breathing motion. These changes may not be accurately accounted for with rigid motion.

  3. Dose-schedule finding in phase I/II clinical trials using a Bayesian isotonic transformation

    PubMed Central

    Li, Yisheng; Bekele, B. Nebiyou; Ji, Yuan; Cook, John D.

    2015-01-01

    A dose-schedule-finding trial is a new type of oncology trial in which investigators aim to find a combination of dose and treatment schedule that has a large probability of efficacy yet a relatively small probability of toxicity. We demonstrate that a major difference between traditional dose-finding and dose-schedule-finding trials is that while the toxicity probabilities follow a simple nondecreasing order in dose-finding trials, those of dose-schedule-finding trials may adhere to a matrix order. We show that the success of a dose-schedule-finding method requires careful statistical modeling and a sensible dose-schedule allocation scheme. We propose a Bayesian hierarchical model that jointly models the unordered probabilities of toxicity and efficacy, and apply a Bayesian isotonic transformation to the posterior samples of the toxicity probabilities, so that the transformed posterior samples adhere to the matrix order constraints. Based on the joint posterior distribution of the order-constrained toxicity probabilities and the unordered efficacy probabilities, we develop a dose-schedule-finding algorithm that sequentially allocates patients to the best dose-schedule combination under certain criteria. We illustrate our methodology through its application to a clinical trial in leukemia, and compare it to two alternative approaches. PMID:18563789
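
    The matrix-order Bayesian isotonic transformation is specific to the paper, but its one-dimensional building block, projecting each posterior draw onto nondecreasing sequences by least squares, is the pool-adjacent-violators algorithm (PAVA), sketched below.

```python
import numpy as np

def pava(y, w=None):
    """Pool-adjacent-violators: least-squares projection of y onto the set of
    nondecreasing sequences (optional weights w). Applying this to each
    posterior draw of dose-ordered toxicity probabilities is the 1D analogue
    of the paper's isotonic transformation."""
    y = np.asarray(y, dtype=float)
    w = np.ones_like(y) if w is None else np.asarray(w, dtype=float)
    vals, wts, cnt = [], [], []
    for yi, wi in zip(y, w):
        vals.append(yi); wts.append(wi); cnt.append(1)
        while len(vals) > 1 and vals[-2] > vals[-1]:  # pool violating blocks
            tot = wts[-2] + wts[-1]
            vals[-2] = (wts[-2] * vals[-2] + wts[-1] * vals[-1]) / tot
            wts[-2] = tot; cnt[-2] += cnt[-1]
            vals.pop(); wts.pop(); cnt.pop()
    return np.repeat(vals, cnt)

# e.g. pava([0.10, 0.08, 0.25, 0.20]) -> [0.09, 0.09, 0.225, 0.225]
```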

  4. A comparison of three optimization algorithms for intensity modulated radiation therapy.

    PubMed

    Pflugfelder, Daniel; Wilkens, Jan J; Nill, Simeon; Oelfke, Uwe

    2008-01-01

    In intensity modulated treatment techniques, the modulation of each treatment field is obtained using an optimization algorithm. Multiple optimization algorithms have been proposed in the literature, e.g. steepest descent, conjugate gradient, and quasi-Newton methods, to name a few. The standard optimization algorithm in our in-house inverse planning tool KonRad is a quasi-Newton algorithm. Although this algorithm yields good results, it also has some drawbacks. We therefore implemented an improved optimization algorithm based on the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) routine. In this paper the improved optimization algorithm is described. To compare the two algorithms, several treatment plans were optimized using both algorithms. These included photon (IMRT) as well as proton (IMPT) intensity modulated therapy treatment plans. To present the results in a larger context, the widely used conjugate gradient algorithm was also included in this comparison. On average, the improved optimization algorithm was six times faster to reach the same objective function value. The improvement was not limited to speed: due to the faster convergence, the improved optimization algorithm usually terminates the optimization process at a lower objective function value. The average observed improvement in the objective function value was 37%. This improvement is clearly visible in the corresponding dose-volume histograms. The benefit of the improved optimization algorithm is particularly pronounced in proton therapy plans. The conjugate gradient algorithm ranked between the other two algorithms, with an average speedup factor of two and an average improvement of the objective function value of 30%.
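
    With a library optimizer, the L-BFGS approach reduces to supplying the objective value and its gradient; the sketch below solves a toy nonnegative fluence-map least-squares problem with SciPy's L-BFGS-B. The paper's in-house KonRad implementation differs in objective and constraints; the matrix and prescription here are made up.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
D = rng.random((200, 50))  # hypothetical dose-influence matrix (voxels x beamlets)
d_presc = np.ones(200)     # prescribed dose per voxel (arbitrary units)

def objective(x):
    """Quadratic objective and its gradient; with jac=True a single callable
    returns both."""
    r = D @ x - d_presc
    return float(r @ r), 2.0 * (D.T @ r)

res = minimize(objective, x0=np.zeros(50), jac=True, method="L-BFGS-B",
               bounds=[(0.0, None)] * 50)  # nonnegative beamlet weights
fluence = res.x
```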

  5. Noise Reduction in Low-Dose X-Ray Fluoroscopy for Image-Guided Radiation Therapy

    SciTech Connect

    Wang, Jing; Zhu, Lei; Xing, Lei

    2009-06-01

    Purpose: To improve the quality of low-dose X-ray fluoroscopic images using a statistics-based restoration algorithm so that patient fluoroscopy can be performed with reduced radiation dose. Method and Materials: Noise in the low-dose fluoroscopic images was suppressed by temporal and spatial filtering. The temporal correlation among neighboring frames was considered via the Karhunen-Loeve (KL) transform (i.e., principal component analysis). After the KL transform, the selected neighboring frames of fluoroscopy were decomposed into uncorrelated and ordered principal components. For each KL component, a penalized weighted least-squares (PWLS) objective function was constructed to restore the ideal image. The penalty was chosen to be anisotropic quadratic, and the penalty parameter in each KL component was inversely proportional to its corresponding eigenvalue. A smaller KL eigenvalue is associated with a KL component of lower signal-to-noise ratio (SNR), so a larger penalty parameter should be used for such a component. The low-dose fluoroscopic images were acquired using a Varian Acuity simulator. A quality assurance phantom and an anthropomorphic chest phantom were used to evaluate the presented algorithm. Results: In the images restored by the proposed KL-domain PWLS algorithm, noise is greatly suppressed, whereas fine structures are well preserved. The average improvement in SNR is 75% among selected regions of interest. Comparison studies with traditional techniques, such as the mean and median filters, show that the proposed algorithm is advantageous in terms of structure preservation. Conclusions: The proposed noise reduction algorithm can significantly improve the quality of low-dose X-ray fluoroscopic images and allows for dose reduction in X-ray fluoroscopy.
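
    A compact sketch of the KL-domain idea, with Gaussian smoothing as a crude stand-in for the PWLS restoration: frames are decomposed into temporal principal components, and components with smaller eigenvalues (lower SNR) are smoothed more strongly, mirroring the eigenvalue-dependent penalty described above. The smoothing schedule is an arbitrary illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def kl_denoise(frames):
    """frames: (T, H, W) stack of consecutive fluoroscopy frames.
    Decompose across time with the KL transform (PCA), smooth each component
    image with strength inversely related to its eigenvalue, and reconstruct.
    Gaussian smoothing is a stand-in for the PWLS restoration."""
    T, H, W = frames.shape
    X = frames.reshape(T, -1).astype(float)
    mean = X.mean(axis=0)
    Xc = X - mean
    C = Xc @ Xc.T / Xc.shape[1]       # T x T temporal covariance
    evals, evecs = np.linalg.eigh(C)  # eigenvalues in ascending order
    comps = evecs.T @ Xc              # KL component images, one per row
    for k in range(T):
        # smaller eigenvalue -> lower SNR -> heavier smoothing
        sigma = 2.0 / (1.0 + evals[k] / (evals[-1] + 1e-12))
        comps[k] = gaussian_filter(comps[k].reshape(H, W), sigma=sigma).ravel()
    return (evecs @ comps + mean).reshape(T, H, W)
```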

  6. Ozone Uncertainties Study Algorithm (OUSA)

    NASA Technical Reports Server (NTRS)

    Bahethi, O. P.

    1982-01-01

    An algorithm to carry out sensitivity, uncertainty, and overall imprecision studies for a set of input parameters to a one-dimensional steady-state ozone photochemistry model is described. This algorithm can be used to evaluate steady-state perturbations due to point-source or distributed ejection of H2O, ClX, and NOx, as well as varying the incident solar flux. This algorithm is operational on the IBM OS/360-91 computer at NASA/Goddard Space Flight Center's Science and Applications Computer Center (SACC).

  7. Gradient maintenance: A new algorithm for fast online replanning

    SciTech Connect

    Ahunbay, Ergun E.; Li, X. Allen

    2015-06-15

    Purpose: Clinical use of online adaptive replanning has been hampered by the impractically long time required to delineate volumes based on the image of the day. The authors propose a new replanning algorithm, named gradient maintenance (GM), which does not require the delineation of organs at risk (OARs), and can enhance automation, drastically reducing planning time and improving the consistency and throughput of online replanning. Methods: The proposed GM algorithm is based on the hypothesis that if the dose gradient toward each OAR in the daily anatomy can be maintained the same as that in the original plan, the intended quality of the original plan will be preserved in the adaptive plan. The algorithm requires a series of partial concentric rings (PCRs) to be automatically generated around the target toward each OAR on the planning and daily images. The PCRs are used in the daily optimization objective function. The PCR dose constraints are generated with dose-volume data extracted from the original plan. To demonstrate this idea, GM plans generated using daily images acquired with an in-room CT were compared to regular optimization and image guided radiation therapy repositioning plans for representative prostate and pancreatic cancer cases. Results: Adaptive replanning using the GM algorithm, requiring only the target contour from the CT of the day, can be completed within 5 min without using high-power hardware. The obtained adaptive plans were almost as good as the regular optimization plans and were better than the repositioning plans for the cases studied. Conclusions: The newly proposed GM replanning algorithm, requiring only target delineation, not full delineation of OARs, substantially increased planning speed for online adaptive replanning. The preliminary results indicate that the GM algorithm may be a solution to improve the ability for automation and may be especially suitable for sites with small-to-medium size targets surrounded by

  8. Assessment and Minimization of Contralateral Breast Dose for Conventional and Intensity Modulated Breast Radiotherapy

    SciTech Connect

    Burmeister, Jay; Alvarado, Nicole; Way, Sarah; McDermott, Patrick; Bossenberger, Todd; Jaenisch, Harriett; Patel, Rajiv; Washington, Tara

    2008-04-01

    Breast radiotherapy is associated with an increased risk of contralateral breast cancer (CBC) in women under age 45 at the time of treatment. This risk increases with increasing absorbed dose to the contralateral breast. The use of intensity modulated radiotherapy (IMRT) is expected to substantially reduce the dose to the contralateral breast by eliminating scattered radiation from physical beam modifiers. The absorbed dose to the contralateral breast was measured for 5 common radiotherapy techniques, including paired 15 deg. wedges, lateral 30 deg. wedge only, custom-designed physical compensators, aperture based (field-within-field) IMRT with segments chosen by the planner, and inverse planned IMRT with segments chosen by a leaf sequencing algorithm after dose volume histogram (DVH)-based fluence map optimization. Further reduction in contralateral breast dose through the use of lead shielding was also investigated. While shielding was observed to have the most profound impact on surface dose, the radiotherapy technique proved to be most important in determining internal dose. Paired wedges or compensators result in the highest contralateral breast doses (nearly 10% of the prescription dose on the medial surface), while use of IMRT or removal of the medial wedge results in significantly lower doses. Aperture-based IMRT results in the lowest internal doses, primarily due to the decrease in the number of monitor units required and the associated reduction in leakage dose. The use of aperture-based IMRT reduced the average dose to the contralateral breast by greater than 50% in comparison to wedges or compensators. Combined use of IMRT and 1/8-inch-thick lead shielding reduced the dose to the interior and surface of the contralateral breast by roughly 60% and 85%, respectively. This reduction may warrant the use of IMRT for younger patients who have a statistically significant risk of contralateral breast cancer associated with breast radiotherapy.

  9. Messy genetic algorithms: Recent developments

    SciTech Connect

    Kargupta, H.

    1996-09-01

    Messy genetic algorithms define a rare class of algorithms that realize the need for detecting appropriate relations among members of the search domain in optimization. This paper reviews earlier works in messy genetic algorithms and describes some recent developments. It also describes the gene expression messy GA (GEMGA)--an O(Λ^κ(ℓ² + κ)) sample-complexity algorithm for the class of order-κ delineable problems (problems that can be solved by considering no higher than order-κ relations) of size ℓ and alphabet size Λ. Experimental results are presented to demonstrate the scalability of the GEMGA.

  10. DNABIT Compress - Genome compression algorithm.

    PubMed

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-01

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that the "DNABIT Compress" algorithm is the best among the existing compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm also significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (unique bit codes) to fragments of a DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. The proposed new algorithm achieves a compression ratio as low as 1.58 bits/base, whereas the existing best methods could not achieve a ratio below 1.72 bits/base.
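
    The published bit-assignment scheme is specific to DNABIT Compress; as a baseline only, the sketch below shows plain 2-bit packing of A/C/G/T (2 bits/base before any repeat coding), which is the starting point such algorithms improve on.

```python
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

def pack_2bit(seq):
    """Pack an A/C/G/T string at 2 bits/base (4 bases per byte, last byte
    zero-padded). DNABIT Compress goes further by assigning unique bit codes
    to exact and reverse repeats; that scheme is not reproduced here."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        chunk, b = seq[i:i + 4], 0
        for ch in chunk:
            b = (b << 2) | CODE[ch]
        out.append(b << 2 * (4 - len(chunk)))
    return bytes(out)

# pack_2bit("ACGTAC") -> 2 bytes instead of 6 (8 bits/base down to ~2)
```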

  11. NOSS Altimeter Detailed Algorithm specifications

    NASA Technical Reports Server (NTRS)

    Hancock, D. W.; Mcmillan, J. D.

    1982-01-01

    The details of the algorithms and data sets required for satellite radar altimeter data processing are documented in a form suitable for (1) development of the benchmark software and (2) coding the operational software. The algorithms reported in detail are those established for altimeter processing. The algorithms which required some additional development before documenting for production were only scoped. The algorithms are divided into two levels of processing. The first level converts the data to engineering units and applies corrections for instrument variations. The second level provides geophysical measurements derived from altimeter parameters for oceanographic users.

  12. Dose and dose rate effectiveness of space radiation.

    PubMed

    Schimmerling, W; Cucinotta, F A

    2006-01-01

    Dose and dose rate effectiveness factors (DDREF), in conjunction with other weighting factors, are commonly used to scale atomic bomb survivor data in order to establish limits for occupational radiation exposure, including radiation exposure in space. We use some well-known facts about the microscopic pattern of energy deposition of high-energy heavy ions, and about the dose rate dependence of chemical reactions initiated by radiation, to show that DDREF are likely to vary significantly as a function of particle type and energy, cell, tissue, and organ type, and biological end point. As a consequence, we argue that validation of DDREF by conventional methods, e.g. irradiating animal colonies and compiling statistics of cancer mortality, is not appropriate. However, the use of approaches derived from information theory and thermodynamics is a very wide field, and the present work can only be understood as a contribution to an ongoing discussion. PMID:17169950

  13. Radiological dose assessment for vault storage concepts

    SciTech Connect

    Richard, R.F.

    1997-02-25

    This radiological dose assessment presents neutron and photon dose rates in support of project W-460. Dose rates are provided for a single 3013 container, the "infloor" storage vault concept, and the "cubicle" storage vault concept.

  14. SU-F-19A-03: Dosimetric Advantages in Critical Structure Dose Sparing by Using a Multichannel Cylinder in High Dose Rate Brachytherapy to Treat Vaginal Cuff Cancer

    SciTech Connect

    Syh, J; Syh, J; Patel, B; Zhang, J; Wu, H; Rosen, L

    2014-06-15

    Purpose: The multichannel cylindrical vaginal applicator is a variation of the traditional single-channel cylindrical vaginal applicator. The multichannel applicator has additional peripheral channels that provide more flexibility in the planning process. The dosimetric advantage is to reduce dose to adjacent organs at risk (OARs) such as the bladder and rectum while maintaining target coverage through dose optimization with the additional channels. Methods: The vaginal HDR brachytherapy plans were all CT based. CT images were acquired at 2 mm slice thickness to preserve the integrity of cylinder contouring. The CTV, a 5 mm rind over the prescribed treatment length, was reconstructed from a 5 mm expansion of the inserted cylinder. The goal was 95% of the CTV covered by 95% of the prescribed dose in both single-channel planning (SCP) and multichannel planning (MCP) before proceeding with any further optimization for dose reduction to critical structures, with emphasis on D2cc and V2Gy. Results: This study demonstrated that a noticeable dose reduction to OARs was apparent in multichannel plans. The D2cc of the rectum and bladder were reduced for multichannel versus single-channel plans. The V2Gy of the rectum was 93.72% and 83.79% (p=0.007) for single-channel and multichannel plans, respectively (Figure 1 and Table 1). The main goal in using the multichannel vaginal applicator in HDR brachytherapy is to assure adequate target coverage while reducing the dose to the OARs without any compromise. Conclusion: Multichannel plans were optimized using the anatomy-based inverse optimization algorithm of inverse planning simulated annealing. The optimization goal of the algorithm was to improve the clinical target volume dose coverage while reducing the dose to critical organs such as the bladder, rectum, and bowel. The comparison between SCP and MCP demonstrated that MCP is superior to SCP, where the dwell positions are based on a geometric array only. It is concluded that MCP is preferable and is able to provide certain features superior to SCP.

  15. Peripheral doses from pediatric IMRT

    SciTech Connect

    Klein, Eric E.; Maserang, Beth; Wood, Roy; Mansur, David

    2006-07-15

    Peripheral dose (PD) data exist for conventional fields (≥10 cm) and intensity-modulated radiotherapy (IMRT) delivery to standard adult-sized phantoms. Pediatric peripheral dose reports are limited to conventional therapy and are model based. Our goal was to ascertain whether data acquired from full phantom studies and/or pediatric models, with IMRT treatment times, could predict organ at risk (OAR) dose for pediatric IMRT. As monitor units (MUs) are greater for IMRT, it is expected that IMRT PD will be higher, potentially compounded by decreased patient size (absorption). Baseline slab phantom peripheral dose measurements were conducted for very small field sizes (from 2 to 10 cm). Data were collected at distances ranging from 5 to 72 cm away from the field edges. Collimation was either with the collimating jaws or the multileaf collimator (MLC) oriented either perpendicular to or along the peripheral dose measurement plane. For the clinical tests, five patients with intracranial or base-of-skull lesions were chosen. IMRT and conventional three-dimensional (3D) plans for the same patient/target/dose (180 cGy) were optimized without limitation on the number of fields or wedge use. Six-MV, 120-leaf MLC Varian axial beams were used. A phantom mimicking a 3-year-old was configured per Centers for Disease Control data. Micro (0.125 cc) and cylindrical (0.6 cc) ionization chambers were used for the thyroid, breast, ovaries, and testes. The PD was recorded by electrometers set to the 10⁻¹⁰ scale. Each system set was uniquely calibrated. For the slab phantom studies, close peripheral points were found to have a higher dose for low energy and larger field size and when the MLC was not deployed. For points more distant from the field edge, the PD was higher for high-energy beams. MLC orientation was found to be inconsequential for the small fields tested. The thyroid dose was lower for IMRT delivery than that predicted for conventional (ratio of IMRT/conventional ranged

  16. Implementation of Monte Carlo Dose calculation for CyberKnife treatment planning

    NASA Astrophysics Data System (ADS)

    Ma, C.-M.; Li, J. S.; Deng, J.; Fan, J.

    2008-02-01

    Accurate dose calculation is essential to advanced stereotactic radiosurgery (SRS) and stereotactic radiotherapy (SRT), especially for treatment planning involving heterogeneous patient anatomy. This paper describes the implementation of a fast Monte Carlo dose calculation algorithm in SRS/SRT treatment planning for the CyberKnife® SRS/SRT system. A superposition Monte Carlo algorithm is developed for this application. Photon mean free paths and interaction types for different materials and energies, as well as the tracks of secondary electrons, are pre-simulated using the MCSIM system. Photon interaction forcing and splitting are applied to the source photons in the patient calculation, and the pre-simulated electron tracks are repeated with proper corrections based on the tissue density and electron stopping powers. Electron energy is deposited along the tracks and accumulated in the simulation geometry. Scattered and bremsstrahlung photons are transported, after applying the Russian roulette technique, in the same way as the primary photons. Dose calculations are compared with full Monte Carlo simulations performed using EGS4/MCSIM and with the CyberKnife treatment planning system (TPS) for lung, head & neck, and liver treatments. Comparisons with full Monte Carlo simulations show excellent agreement (within 0.5%). More than 10% differences in the target dose are found between Monte Carlo simulations and the CyberKnife TPS for SRS/SRT lung treatment, while negligible differences are shown in head and neck and liver for the cases investigated. The calculation time using our superposition Monte Carlo algorithm is reduced by up to a factor of 62 (46 on average for 10 typical clinical cases) compared to full Monte Carlo simulations. SRS/SRT dose distributions calculated by simple dose algorithms may be significantly overestimated for small lung target volumes, which can be improved by accurate Monte Carlo dose calculations.

  17. Dose calculation for electron therapy

    NASA Astrophysics Data System (ADS)

    Gebreamlak, Wondesen T.

    The dose delivered by electron beams has a complex dependence on the shape of the field, any field-shaping shields, the design of the collimator system, and the energy of the beam. This complicated dependence is due to multiple scattering of the electron beam as the beam travels from the accelerator head to the patient. The dosimetry of only regular field shapes (circular, square, or rectangular) is well developed. However, most tumors have irregular shapes and their dosimetry is determined by direct measurement. This is laborious and time consuming. In addition, error can be introduced during measurements. The lateral build-up ratio (LBR) method, which is based on the Fermi-Eyges multiple scattering theory, calculates the dosimetry of irregular electron beam shapes. The accuracy of this method depends on the function σr(r,E) (the mean square radial displacement of the electron beam in the medium) used in the calculation. This research focuses on improving the accuracy of electron dose calculations using the LBR method by investigating the properties of σr(r,E). The percentage depth dose curves of different circular cutouts were measured using four electron beam energies (6, 9, 12, and 15 MeV), four electron applicator sizes (6x6, 10x10, 14x14, and 20x20 cm), and three source-surface distance values (100, 105, 110 cm). The measured percentage depth dose curves were normalized at a depth of 0.05 cm. Using the normalized depth dose, the lateral build-up ratio curves were determined. Using the cutout radius and the lateral build-up ratio values, σr(z,E) was determined. It is shown that the σ value increases linearly with cutout size until the cutout radius reaches the equilibrium range of the electron beam. The σ value of an arbitrary circular cutout was determined from interpolation of the σ-versus-cutout curve. The corresponding LBR value of the circular cutout was determined using its radius and σ values. The depth dose distribution of

  18. Effect of Acuros XB algorithm on monitor units for stereotactic body radiotherapy planning of lung cancer

    SciTech Connect

    Khan, Rao F.; Villarreal-Barajas, Eduardo; Lau, Harold; Liu, Hong-Wei

    2014-04-01

    Stereotactic body radiotherapy (SBRT) is a curative regimen that uses hypofractionated radiation-absorbed dose to achieve a high degree of local control in early-stage non–small cell lung cancer (NSCLC). In the presence of heterogeneities, the dose calculation for the lungs becomes challenging. We have evaluated the dosimetric effect of the recently introduced advanced dose-calculation algorithm, Acuros XB (AXB), for SBRT of NSCLC. A total of 97 patients with early-stage lung cancer who underwent SBRT at our cancer center during the last 4 years were included. Initial clinical plans were created in Aria Eclipse version 8.9 or prior, using 6 to 10 fields with 6-MV beams, and dose was calculated using the anisotropic analytic algorithm (AAA) as implemented in the Eclipse treatment planning system. The clinical plans were recalculated in Aria Eclipse 11.0.21 using both the AAA and AXB algorithms. Both sets of plans were normalized to the same prescription point at the center of mass of the target. A secondary monitor unit (MU) calculation was performed using the commercial program RadCalc for all of the fields. For planning target volumes ranging from 19 to 375 cm³, a comparison of MUs was performed for both algorithms on a field and plan basis. In total, the variation of MUs for 677 treatment fields was investigated in terms of the equivalent depth and the equivalent square of the field. Overall, the MUs required by AXB to deliver the prescribed dose are on average 2% higher than those for AAA. Using a 2-tailed paired t-test, the MUs from the 2 algorithms were found to be significantly different (p < 0.001). The secondary independent MU calculator RadCalc underestimates the required MUs (on average by 4% to 5%) in the lung relative to either of the 2 dose algorithms.
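
    The statistical comparison reported here is a standard paired test; a minimal sketch with made-up per-field MU values (not the study's data):

```python
import numpy as np
from scipy import stats

# Hypothetical per-field monitor units from the two algorithms (same fields);
# these numbers are illustrative only.
mu_aaa = np.array([112.0, 98.5, 143.2, 120.7, 101.9, 135.4])
mu_axb = np.array([114.1, 100.3, 146.8, 122.9, 104.2, 137.6])

t, p = stats.ttest_rel(mu_axb, mu_aaa)             # 2-tailed paired t-test
mean_pct = 100.0 * (mu_axb / mu_aaa - 1.0).mean()  # mean per-field MU difference, %
print(f"t = {t:.2f}, p = {p:.4f}, mean MU difference = {mean_pct:.1f}%")
```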

  19. Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    Finch, S. M.; McMakin, A. H.

    1991-09-01

    The objective of the Hanford Environmental Dose Reconstruction Project is to estimate the radiation dose that individuals and populations could have received from nuclear operations at Hanford since 1944. The project is divided into five technical tasks. These tasks correspond to the path radionuclides followed, from release to impact on humans (i.e., dose estimates). The Source Terms Task develops estimates of radioactive emissions from Hanford facilities since 1944. The Environmental Transport Task reconstructs the movements of radioactive particles from the areas of release to populations. The Environmental Monitoring Data Task assembles, evaluates, and reports historical environmental monitoring data. The Demographics, Agriculture and Food Habits Task develops the data needed to identify the populations that could have been affected by the releases. The Environmental Pathways and Dose Estimates Task uses the information derived from the other tasks to estimate the radiation doses individuals could have received from Hanford radiation. This document lists the progress on this project as of September 1991. 3 figs., 2 tabs.

  20. Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    Finch, S.M.

    1990-01-01

    The objective of the Hanford Environmental Dose Reconstruction Project is to estimate the radiation doses that populations could have received from nuclear operations at Hanford since 1944. The project is being managed and conducted by the Pacific Northwest Laboratory (PNL) under the direction of an independent Technical Steering Panel (TSP). The project is divided into the following technical tasks. These tasks correspond to the path radionuclides followed, from release to impact on humans (dose estimates). The Source Terms Task develops estimates of radioactive emissions from Hanford facilities since 1944. The Environmental Transport Task reconstructs the movement of radioactive materials from the areas of release to populations. The Environmental Monitoring Data Task assembles, evaluates, and reports historical environmental monitoring data. The Demographics, Agriculture, Food Habits Task develops the data needed to identify the populations that could have been affected by the releases. In addition to population and demographic data, the food and water resources and consumption patterns for populations are estimated because they provide a primary pathway for the intake of radionuclides. The Environmental Pathways and Dose Estimates Task uses the information produced by the other tasks to estimate the radiation doses populations could have received from Hanford radiation. Project progress is documented in this monthly report, which is available to the public. 3 figs., 3 tabs.

  1. AGING FACILITY WORKER DOSE ASSESSMENT

    SciTech Connect

    R.L. Thacker

    2005-03-24

    The purpose of this calculation is to estimate radiation doses received by personnel working in the Aging Facility performing operations to transfer aging casks to the aging pads for thermal and logistical management, stage empty aging casks, and retrieve aging casks from the aging pads for further processing in other site facilities. Doses received by workers due to aging cask surveillance and maintenance operations are also included. The specific scope of work contained in this calculation covers both collective doses and individual worker group doses on an annual basis, and includes the contributions due to external and internal radiation from normal operation. There are no Category 1 event sequences associated with the Aging Facility (BSC 2004 [DIRS 167268], Section 7.2.1). The results of this calculation will be used to support the design of the Aging Facility and to provide occupational dose estimates for the License Application. The calculations contained in this document were developed by Environmental and Nuclear Engineering of the Design and Engineering Organization and are intended solely for the use of the Design and Engineering Organization in its work regarding facility operation. Yucca Mountain Project personnel from Environmental and Nuclear Engineering should be consulted before use of the calculations for purposes other than those stated herein or use by individuals other than authorized personnel in Environmental and Nuclear Engineering.

  2. Biological doses with template distribution patterns

    SciTech Connect

    Harrop, R.; Haymond, H.R.; Nisar, A.; Syed, A.N.M.; Feder, B.H.; Neblett, D.L.

    1981-02-01

    Consideration of radiation dose-rate effects emphasizes the advantages of the template method for lateral distribution of multiple sources in the treatment of laterally infiltrating gynecologic cancer, when compared to a conventional technique with colpostats. Biological doses in time-dose fractionation (TDF), ret, and reu units are calculated for the two treatment methods. With the template method the lateral dose (point B) is raised without significantly increasing the doses to the rectum and bladder; that is, the calculated biological doses at points A and B are more nearly equivalent, and the doses to the rectum and bladder are significantly lower than the dose to point B.
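
    The abstract does not give the TDF formula it used; the sketch below computes TDF in its commonly cited Orton–Ellis form, so both the exponents and the example numbers should be read as assumptions from the general literature, not as this paper's method:

```python
# Illustrative time-dose-fractionation (TDF) calculation. The exponents follow
# the commonly cited Orton-Ellis form; the abstract itself does not state the
# formula, so treat this as an assumption, not the authors' exact method.
def tdf(n_fractions: int, dose_per_fraction_cgy: float, total_days: float) -> float:
    """TDF = 1e-3 * n * d^1.538 * (T/n)^(-0.169), with d in cGy (rad), T in days."""
    x = total_days / n_fractions          # average interval between fractions
    return 1e-3 * n_fractions * dose_per_fraction_cgy**1.538 * x**-0.169

# Example: 20 fractions of 200 cGy delivered over 26 days
print(f"TDF = {tdf(20, 200.0, 26.0):.0f}")
```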

  3. Parameterization of solar flare dose

    SciTech Connect

    Lamarche, A.H.; Poston, J.W.

    1996-12-31

    A critical aspect of missions to the moon or Mars will be the safety and health of the crew. Radiation in space is a hazard for astronauts, especially high-energy radiation following certain types of solar flares. A solar flare event can be very dangerous if astronauts are not adequately shielded because flares can deliver a very high dose in a short period of time. The goal of this research was to parameterize solar flare dose as a function of time to see if it was possible to predict solar flare occurrence, thus providing a warning time. This would allow astronauts to take corrective action and avoid receiving a dose greater than the recommended limit set by the National Council on Radiation Protection and Measurements (NCRP).
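
    The functional form of the parameterization is not stated in the abstract; as an illustration only, the sketch below fits one plausible cumulative-dose profile (a Weibull-type curve with hypothetical parameters) to synthetic data:

```python
# The abstract does not give the functional form used; as an illustration,
# fit a Weibull-type cumulative curve D(t) = D_inf * (1 - exp(-(t/tau)^k))
# to synthetic cumulative-dose data. All values here are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def cumulative_dose(t, d_inf, tau, k):
    return d_inf * (1.0 - np.exp(-(t / tau) ** k))

t = np.linspace(0.5, 48.0, 30)                       # hours since flare onset
true_curve = cumulative_dose(t, 12.0, 10.0, 1.8)     # hypothetical "true" profile
data = true_curve + np.random.default_rng(0).normal(0.0, 0.2, t.size)

(d_inf, tau, k), _ = curve_fit(cumulative_dose, t, data, p0=[10.0, 8.0, 1.5])
print(f"D_inf = {d_inf:.1f}, tau = {tau:.1f} h, k = {k:.2f}")
```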

  4. Radiation dose from reentrant electrons.

    PubMed

    Badhwar, G D; Watts, J; Cleghorn, T E

    2001-06-01

    In estimating the crew exposures during an extravehicular activity (EVA), the contribution of reentrant electrons has always been neglected. Although the flux of these electrons is small compared to the flux of trapped electrons, their energy spectrum extends to several GeV, compared to about 7 MeV for trapped electrons. This is also true of splash electrons. Using the measured reentrant electron energy spectra, it is shown that the dose contribution of these electrons to the blood forming organs (BFO) is more than 10 times greater than that from the trapped electrons. The calculations also show that the dose-depth response is a very slowly changing function of depth, and thus adding reasonable amounts of additional shielding would not significantly lower the dose to BFO. PMID:11855420

  5. Radiation Dose from Reentrant Electrons

    NASA Technical Reports Server (NTRS)

    Badhwar, G.D.; Cleghorn, T. E.; Watts, J.

    2003-01-01

    In estimating the crew exposures during an EVA, the contribution of reentrant electrons has always been neglected. Although the flux of these electrons is small compared to the flux of trapped electrons, their energy spectrum extends to several GeV compared to about 7 MeV for trapped electrons. This is also true of splash electrons. Using the measured reentrant electron energy spectra, it is shown that the dose contribution of these electrons to the blood forming organs (BFO) is more than 10 times greater than that from the trapped electrons. The calculations also show that the dose-depth response is a very slowly changing function of depth, and thus adding reasonable amounts of additional shielding would not significantly lower the dose to BFO.

  6. Comparison of optimization algorithms in intensity-modulated radiation therapy planning

    NASA Astrophysics Data System (ADS)

    Kendrick, Rachel

    Intensity-modulated radiation therapy is used to better conform the radiation dose to the target while avoiding healthy tissue. Planning programs employ optimization methods to search for the best fluence of each photon beam, and therefore to create the best treatment plan. The Computational Environment for Radiotherapy Research (CERR), a program written in MATLAB, was used to examine some commonly used algorithms for one 5-beam plan. The algorithms include the genetic algorithm, quadratic programming, pattern search, constrained nonlinear optimization, simulated annealing, the optimization method used in Varian Eclipse™, and some hybrids of these. Quadratic programming, simulated annealing, and a quadratic/simulated-annealing hybrid were also separately compared using different prescription doses. The results of each dose-volume histogram as well as the visual dose color wash were used to compare the plans. CERR's built-in quadratic programming provided the best overall plan, but its avoidance of the organ-at-risk was rivaled by other programs. Hybrids of quadratic programming with some of these algorithms suggest the possibility of better planning programs, as shown by the improved quadratic/simulated-annealing plan when compared to the simulated annealing algorithm alone. Further experimentation will be done to improve cost functions and computational time.
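
    At its core, the quadratic-programming approach minimizes a squared deviation between delivered and prescribed dose over non-negative beamlet weights. A minimal sketch of that step, with a toy dose-influence matrix rather than CERR's actual data structures:

```python
# Sketch of the quadratic-programming step in fluence optimization: minimize
# ||D w - p||^2 subject to w >= 0, where D maps beamlet weights to voxel dose.
# Dimensions and values are toy assumptions, not CERR's actual interface.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
D = rng.uniform(0.0, 1.0, size=(50, 10))   # dose-influence matrix: 50 voxels, 10 beamlets
p = np.full(50, 60.0)                      # prescription dose per voxel (Gy), toy value

w, residual = nnls(D, p)                   # non-negative least squares
dose = D @ w
print(f"beamlet weights: {np.round(w, 2)}")
print(f"max voxel deviation: {np.abs(dose - p).max():.2f} Gy")
```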

  7. Effect of Embolization Material in the Calculation of Dose Deposition in Arteriovenous Malformations

    SciTech Connect

    De la Cruz, O. O. Galvan; Moreno-Jimenez, S.; Larraga-Gutierrez, J. M.; Celis-Lopez, M. A.

    2010-12-07

    This work studies the impact of incorporating high-Z materials (embolization material) into the dose calculation for stereotactic radiosurgery treatment of arteriovenous malformations. A statistical analysis is performed to establish the variables that may affect the dose calculation. Pencil beam (PB) and Monte Carlo (MC) calculation algorithms were used for the comparison. The comparison between the two dose calculations shows that PB overestimates the deposited dose. For the number of patients in the study (20), the statistical analysis shows that the variable that may affect the dose calculation is the volume of high-Z material in the arteriovenous malformation. Further studies are needed to establish the clinical impact on radiosurgery outcomes.

  8. Effect of Embolization Material in the Calculation of Dose Deposition in Arteriovenous Malformations

    NASA Astrophysics Data System (ADS)

    De la Cruz, O. O. Galván; Lárraga-Gutiérrez, J. M.; Moreno-Jiménez, S.; Célis-López, M. A.

    2010-12-01

    This work studies the impact of incorporating high-Z materials (embolization material) into the dose calculation for stereotactic radiosurgery treatment of arteriovenous malformations. A statistical analysis is performed to establish the variables that may affect the dose calculation. Pencil beam (PB) and Monte Carlo (MC) calculation algorithms were used for the comparison. The comparison between the two dose calculations shows that PB overestimates the deposited dose. For the number of patients in the study (20), the statistical analysis shows that the variable that may affect the dose calculation is the volume of high-Z material in the arteriovenous malformation. Further studies are needed to establish the clinical impact on radiosurgery outcomes.

  9. Ultra low radiation dose digital subtraction angiography (DSA) imaging using low rank constraint

    NASA Astrophysics Data System (ADS)

    Niu, Kai; Li, Yinsheng; Schafer, Sebastian; Royalty, Kevin; Wu, Yijing; Strother, Charles; Chen, Guang-Hong

    2015-03-01

    In this work we developed a novel denoising algorithm for DSA image series. This algorithm takes advantage of the low rank nature of the DSA image sequences to enable a dramatic reduction in radiation and/or contrast doses in DSA imaging. Both spatial and temporal regularizers were introduced in the optimization algorithm to further reduce noise. To validate the method, in vivo animal studies were conducted with a Siemens Artis Zee biplane system using different radiation dose levels and contrast concentrations. Both conventionally processed DSA images and the DSA images generated using the novel denoising method were compared using absolute noise standard deviation and the contrast to noise ratio (CNR). With the application of the novel denoising algorithm for DSA, image quality can be maintained with a radiation dose reduction by a factor of 20 and/or a factor of 2 reduction in contrast dose. Image processing is completed on a GPU within a second for a 10s DSA data acquisition.
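
    The low-rank step can be illustrated by stacking the frames into a Casorati matrix and truncating its singular values; the sketch below shows only that step, omitting the paper's spatial and temporal regularizers:

```python
# Minimal sketch of the low-rank idea: stack the DSA frames as columns of a
# Casorati matrix and keep only the leading singular components. The paper
# adds spatial and temporal regularizers on top; those are omitted here.
import numpy as np

def low_rank_denoise(frames: np.ndarray, rank: int) -> np.ndarray:
    """frames: (n_frames, h, w) image series; returns the rank-truncated series."""
    n, h, w = frames.shape
    casorati = frames.reshape(n, h * w).T          # (pixels, frames) matrix
    u, s, vt = np.linalg.svd(casorati, full_matrices=False)
    s[rank:] = 0.0                                 # discard small singular values
    return (u @ np.diag(s) @ vt).T.reshape(n, h, w)

series = np.random.default_rng(2).normal(size=(10, 64, 64))  # stand-in data
denoised = low_rank_denoise(series, rank=3)
```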

  10. A comparison of quantum limited dose and noise equivalent dose

    NASA Astrophysics Data System (ADS)

    Job, Isaias D.; Boyce, Sarah J.; Petrillo, Michael J.; Zhou, Kungang

    2016-03-01

    Quantum-limited dose (QLD) and noise-equivalent dose (NED) are performance metrics often used interchangeably. Although the metrics are related, they are not equivalent unless the treatment of electronic noise is carefully considered. These metrics are increasingly important to properly characterize the low-dose performance of flat panel detectors (FPDs). A system can be said to be quantum-limited when the signal-to-noise ratio (SNR) is proportional to the square root of x-ray exposure. Recent experiments utilizing three methods to determine the quantum-limited dose range yielded inconsistent results. To investigate the deviation in results, generalized analytical equations are developed to model the image processing and analysis of each method. We test the generalized expression for both radiographic and fluoroscopic detectors. The resulting analysis shows that the total noise content of the images processed by each method is inherently different based on the readout scheme. Finally, it is shown that the NED is equivalent to the instrumentation-noise-equivalent exposure (INEE) and, furthermore, that the NED is derived from the quantum-noise-only method of determining QLD. Future investigations will measure quantum-limited performance of radiographic panels with a modified readout scheme to allow for noise improvements similar to measurements performed with fluoroscopic detectors.
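
    The quantum-limited criterion can be checked by fitting the slope of log(SNR) against log(exposure); a sketch with made-up measurements:

```python
# A system is quantum-limited where SNR grows as the square root of exposure,
# i.e., the slope of log(SNR) vs log(exposure) is ~0.5. Values are made up.
import numpy as np

exposure = np.array([0.5, 1.0, 2.0, 4.0, 8.0])     # uGy, hypothetical
snr = np.array([4.1, 5.9, 8.2, 11.6, 16.5])        # measured SNR, hypothetical

slope, _ = np.polyfit(np.log(exposure), np.log(snr), 1)
print(f"log-log slope = {slope:.2f} (0.5 indicates quantum-limited behavior)")
```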

  11. Algorithm Engineering - An Attempt at a Definition

    NASA Astrophysics Data System (ADS)

    Sanders, Peter

    This paper defines algorithm engineering as a general methodology for algorithmic research. The main process in this methodology is a cycle consisting of algorithm design, analysis, implementation and experimental evaluation that resembles Popper’s scientific method. Important additional issues are realistic models, algorithm libraries, benchmarks with real-world problem instances, and a strong coupling to applications. Algorithm theory with its process of subsequent modelling, design, and analysis is not a competing approach to algorithmics but an important ingredient of algorithm engineering.

  12. Algorithm Calculates Cumulative Poisson Distribution

    NASA Technical Reports Server (NTRS)

    Bowerman, Paul N.; Nolty, Robert C.; Scheuer, Ernest M.

    1992-01-01

    Algorithm calculates accurate values of cumulative Poisson distribution under conditions where other algorithms fail because numbers are so small (underflow) or so large (overflow) that computer cannot process them. Factors inserted temporarily to prevent underflow and overflow. Implemented in CUMPOIS computer program described in "Cumulative Poisson Distribution Program" (NPO-17714).
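
    The underflow/overflow problem can be sidestepped by accumulating the Poisson terms in log space; the sketch below mirrors the goal of CUMPOIS, not its actual FORTRAN implementation:

```python
# Sketch of the underflow/overflow-safe idea: accumulate Poisson terms in log
# space with logsumexp instead of multiplying raw probabilities. This mirrors
# the goal of CUMPOIS, not its exact implementation.
import numpy as np
from scipy.special import gammaln, logsumexp

def cumulative_poisson(k: int, lam: float) -> float:
    """P(X <= k) for X ~ Poisson(lam), computed stably in log space."""
    n = np.arange(k + 1)
    log_terms = n * np.log(lam) - lam - gammaln(n + 1)   # log of lam^n e^-lam / n!
    return float(np.exp(logsumexp(log_terms)))

print(cumulative_poisson(900, 1000.0))   # well-behaved even for large lam
```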

  13. Interpolation algorithms for machine tools

    SciTech Connect

    Burleson, R.R.

    1981-08-01

    There are three types of interpolation algorithms presently used in most numerical control systems: digital differential analyzer, pulse-rate multiplier, and binary-rate multiplier. A method for higher order interpolation is in the experimental stages. The trends point toward the use of high-speed microprocessors to perform these interpolation algorithms.
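
    As an illustration of the first algorithm type, a software DDA can interpolate a straight move into unit axis steps (real controllers implement this in fixed-point hardware):

```python
# Illustrative software DDA (digital differential analyzer) interpolation of a
# straight move into unit axis steps, the first of the three algorithm types
# mentioned above. Real controllers do this in fixed-point hardware.
def dda_line(dx: int, dy: int):
    """Yield (x, y) integer positions interpolating a move of (dx, dy) steps."""
    steps = max(abs(dx), abs(dy))
    if steps == 0:
        return
    x_inc, y_inc = dx / steps, dy / steps
    x = y = 0.0
    for _ in range(steps):
        x += x_inc
        y += y_inc
        yield round(x), round(y)

print(list(dda_line(7, 3)))
```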

  14. FORTRAN Algorithm for Image Processing

    NASA Technical Reports Server (NTRS)

    Roth, Don J.; Hull, David R.

    1987-01-01

    FORTRAN computer algorithm containing various image-processing analysis and enhancement functions developed. Algorithm developed specifically to process images of developmental heat-engine materials obtained with sophisticated nondestructive evaluation instruments. Applications of program include scientific, industrial, and biomedical imaging for studies of flaws in materials, analyses of steel and ores, and pathology.

  15. Computer algorithm for coding gain

    NASA Technical Reports Server (NTRS)

    Dodd, E. E.

    1974-01-01

    Development of a computer algorithm for coding gain for use in an automated communications link design system. Using an empirical formula which defines coding gain as used in space communications engineering, an algorithm is constructed on the basis of available performance data for nonsystematic convolutional encoding with soft-decision (eight-level) Viterbi decoding.

  16. Algorithm for Autonomous Landing

    NASA Technical Reports Server (NTRS)

    Kuwata, Yoshiaki

    2011-01-01

    Because of their small size, high maneuverability, and easy deployment, micro aerial vehicles (MAVs) are used for a wide variety of both civilian and military missions. One of their current drawbacks is the vast array of sensors (such as GPS, altimeter, radar, and the like) required to make a landing. Due to the MAV's small payload size, this is a major concern. Replacing the imaging sensors with a single monocular camera is sufficient to land a MAV. By applying optical flow algorithms to images obtained from the camera, time-to-collision can be measured. This is a measurement of position and velocity (but not of absolute distance), and can be used to avoid obstacles as well as to facilitate a landing on a flat surface given a set of initial conditions. The key to this approach is to calculate time-to-collision based on some image on the ground. By holding the angular velocity constant, horizontal speed decreases linearly with the height, resulting in a smooth landing. Mathematical proofs show that even with actuator saturation or modeling/measurement uncertainties, MAVs can land safely. Landings of this nature may have a higher velocity than is desirable, but this can be compensated for by a cushioning or dampening system, or by using a system of legs to grab onto a surface. Such a monocular camera system can increase vehicle payload size (or correspondingly reduce vehicle size), increase speed of descent, and guarantee a safe landing by directly correlating speed to height from the ground.
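
    The key quantity is time-to-collision, which follows from the growth rate of an image feature's scale without any absolute distance measurement; a sketch with hypothetical values:

```python
# Sketch of the core quantity: time-to-collision can be estimated from the
# rate at which an image feature's scale grows, tau ~ s / (ds/dt), without
# knowing absolute distance. The values below are hypothetical.
def time_to_collision(scale_prev: float, scale_now: float, dt: float) -> float:
    """tau = s / (ds/dt); a smaller tau means impact is sooner."""
    ds_dt = (scale_now - scale_prev) / dt
    return scale_now / ds_dt

# A ground patch whose apparent size grows from 40 to 44 px in 0.1 s
print(f"tau = {time_to_collision(40.0, 44.0, 0.1):.1f} s")
```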

  17. Panniculitides, an algorithmic approach.

    PubMed

    Zelger, B

    2013-08-01

    The issue of inflammatory diseases of the subcutis and its mimicries is generally considered a difficult field of dermatopathology. Yet, in my experience, with appropriate biopsies and good clinicopathological correlation, a specific diagnosis of the panniculitides can usually be made. Knowledge of some basic anatomical and pathological issues is essential. Anatomically, the panniculus consists of fatty lobules separated by fibrous septa. Pathologically, inflammation of the panniculus is defined and recognized by an inflammatory process that leads to tissue damage and necrosis. Several types of fat necrosis are observed: xanthomatized macrophages in lipophagic necrosis; granular fat necrosis and fat micropseudocysts in liquefactive fat necrosis; mummified adipocytes in "hyalinizing" fat necrosis with/without saponification and/or calcification; and lipomembranous membranes in membranous fat necrosis. In an algorithmic approach, an inflammatory process recognized by the features elaborated above is best assessed in three steps: recognition of the pattern, then of the subpattern, and finally of the presence and composition of inflammatory cells. The pattern distinguishes a mostly septal from a mostly lobular distribution at scanning magnification. In the subpattern category one looks for the presence or absence of vasculitis and, if present, the size and nature of the involved blood vessels: arterioles and small arteries or veins; capillaries or postcapillary venules. The third step is to identify the nature of the cells present in the inflammatory infiltrate and, finally, to look for additional histopathologic features that allow a specific final diagnosis in the language of clinical dermatology of disease involving the subcutaneous fat.

  18. SU-E-T-219: Investigation of IMRT Out-Of-Field Dose Calculation Accuracy for a Commercial Treatment Planning System

    SciTech Connect

    2014-06-01

    Purpose: Inaccuracies in out-of-field calculations could lead to underestimation of dose to organs-at-risk. This study evaluates the dose calculation accuracy of a model-based calculation algorithm at points outside the primary treatment field for an intensity modulated radiation therapy (IMRT) plan using experimental measurements. Methods: The treatment planning system investigated is Varian Eclipse V.10 with the Analytical Anisotropic Algorithm (AAA). The IMRT fields investigated are from real patient treatment plans. The doses from a dynamic (DMLC) IMRT brain plan were calculated and compared with measured doses at locations outside the primary treatment fields. Measurements were performed with a MatriXX system (2-D chamber array) placed in solid water. All fields were set vertically incident on the phantom and were 9 cm × 6 cm or smaller. The dose was normalized to the central axis for points up to 15 cm off isocenter. The comparisons were performed at depths of 2, 10, 15, and 20 cm. Results: The measurements have shown that AAA calculations underestimate doses at points outside the primary treatment field. The underestimation is largest at 2 cm depth and decreases with depth, by up to a factor of 2 at 20 cm. In low-dose (<2% of target dose) regions outside the primary fields, the local dose underestimations can be >200% compared to measured doses. Relative to the plan target dose, the measured doses to points outside the field were less than 1% at shallow depths and less than 2% at greater depths. Conclusion: Compared to measurements, the AAA algorithm underestimated the dose at points outside the treatment field, with the greatest differences observed at shallow depths. Despite large local dose uncertainties predicted by the treatment planning system, the impact of these uncertainties is expected to be insignificant as doses at these points were less than 1-2% of the prescribed treatment dose.
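
    The distinction between local and prescription-relative error can be made concrete with a single point dose; the numbers below are illustrative, not the study's measurements:

```python
# Sketch of why a >200% local error can still be clinically small: express the
# same measured/calculated point-dose difference as a local percentage and as
# a percentage of the prescription dose. Numbers are illustrative only.
measured = 0.9        # Gy, measured out-of-field point dose (hypothetical)
calculated = 0.3      # Gy, TPS-calculated dose at the same point (hypothetical)
prescription = 60.0   # Gy, plan target dose (hypothetical)

local_pct = 100.0 * (measured - calculated) / calculated
relative_pct = 100.0 * (measured - calculated) / prescription
print(f"local underestimation:   {local_pct:.0f}%")      # 200%
print(f"relative to target dose: {relative_pct:.1f}%")   # 1.0%
```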

  19. Cubit Adaptive Meshing Algorithm Library

    2004-09-01

    CAMAL (Cubit adaptive meshing algorithm library) is a software component library for mesh generation. CAMAL 2.0 includes components for triangle, quad and tetrahedral meshing. A simple Application Programmers Interface (API) takes a discrete boundary definition and CAMAL computes a quality interior unstructured grid. The triangle and quad algorithms may also import a geometric definition of a surface on which to define the grid. CAMAL’s triangle meshing uses a 3D space advancing front method, the quad meshing algorithm is based upon Sandia’s patented paving algorithm, and the tetrahedral meshing algorithm employs the GHS3D-Tetmesh component developed by INRIA, France.

  20. Testing an earthquake prediction algorithm

    USGS Publications Warehouse

    Kossobokov, V.G.; Healy, J.H.; Dewey, J.W.

    1997-01-01

    A test to evaluate earthquake prediction algorithms is being applied to a Russian algorithm known as M8. The M8 algorithm makes intermediate-term predictions for earthquakes to occur in a large circle, based on integral counts of transient seismicity in the circle. In a retroactive prediction for the period January 1, 1985 to July 1, 1991, the algorithm as configured for the forward test would have predicted eight of ten strong earthquakes in the test area. A null hypothesis, based on random assignment of predictions, predicts eight earthquakes in 2.87% of the trials. The forward test began July 1, 1991 and will run through December 31, 1997. As of July 1, 1995, the algorithm had forward predicted five out of nine earthquakes in the test area, a success ratio that would be achieved in 53% of random trials under the null hypothesis.
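
    The style of the null-hypothesis calculation can be sketched as a binomial tail probability; the coverage fraction below is a made-up stand-in, not the value from the paper's null model:

```python
# Sketch of the null-hypothesis calculation style: if random predictions cover
# a fraction alpha of space-time, the chance of hitting >= 8 of 10 quakes is a
# binomial tail. alpha is a made-up stand-in for the paper's null model.
from scipy.stats import binom

alpha = 0.5                       # hypothetical fraction covered by random alarms
p_tail = binom.sf(7, 10, alpha)   # P(X >= 8) = survival function at 7
print(f"P(>= 8 of 10 by chance) = {p_tail:.4f}")
```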

  1. Colistin: how should it be dosed for the critically ill?

    PubMed

    Landersdorfer, Cornelia B; Nation, Roger L

    2015-02-01

    Colistin, an "old" polymyxin antibiotic, is increasingly being used as last-line treatment against infections caused by multidrug-resistant gram-negative bacteria. It is administered to patients, parenterally or by inhalation, as its inactive prodrug colistin methanesulfonate (CMS). Scientifically based recommendations on how to optimally dose colistin in critically ill patients have become available over the last decade and are extremely important, as colistin has a narrow therapeutic window. A dosing algorithm has been developed to achieve desired plasma colistin concentrations in critically ill patients, including the necessary dose adjustments for patients with impaired kidney function and those on renal replacement therapy. Due to the slow conversion of CMS to colistin, a loading dose is needed to generate effective concentrations within a reasonable time period. Therapeutic drug monitoring is warranted, where available, because of the observed high interpatient variability in plasma colistin concentrations. Combination therapy should be considered when the infecting pathogen has a colistin minimum inhibitory concentration above 1 mg/L, as increasing the dose may not be feasible due to the risk of nephrotoxicity. Inhalation of CMS achieves considerably higher colistin concentrations in lung fluids than is possible with intravenous administration, with negligible plasma exposure. Similarly, for central nervous system infections, dosing CMS directly into the cerebrospinal fluid generates significantly higher colistin concentrations at the infection site compared with what can be achieved with systemic administration. While questions remain to be addressed via ongoing research, this article reviews the significant advances that have been made toward optimizing the clinical use of colistin.
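
    A very rough sketch of the shape of such a loading-dose rule follows; the coefficient and example values are placeholders, not the published algorithm's numbers, so the primary dosing literature must be consulted before any use:

```python
# Very rough sketch of the structure of a CMS loading-dose rule of the kind
# described above. The coefficient below is a PLACEHOLDER, not the published
# algorithm's value; this is illustrative only, never a dosing reference.
def cms_loading_dose_mg(target_css_mg_per_l: float, body_weight_kg: float,
                        k_load: float = 2.0) -> float:
    """Loading dose (mg colistin base activity) ~ k * target Css * body weight."""
    return k_load * target_css_mg_per_l * body_weight_kg

# e.g., a target average steady-state concentration of 2 mg/L in a 70 kg patient
print(f"{cms_loading_dose_mg(2.0, 70.0):.0f} mg CBA")
```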

  2. Fast Pencil Beam Dose Calculation for Proton Therapy Using a Double-Gaussian Beam Model

    PubMed Central

    da Silva, Joakim; Ansorge, Richard; Jena, Rajesh

    2015-01-01

    The highly conformal dose distributions produced by scanned proton pencil beams (PBs) are more sensitive to motion and anatomical changes than those produced by conventional radiotherapy. The ability to calculate the dose in real-time as it is being delivered would enable, for example, online dose monitoring, and is therefore highly desirable. We have previously described an implementation of a PB algorithm running on graphics processing units (GPUs) intended specifically for online dose calculation. Here, we present an extension to the dose calculation engine employing a double-Gaussian beam model to better account for the low-dose halo. To the best of our knowledge, it is the first such PB algorithm for proton therapy running on a GPU. We employ two different parameterizations for the halo dose, one describing the distribution of secondary particles from nuclear interactions found in the literature and one relying on directly fitting the model to Monte Carlo simulations of PBs in water. Despite the large width of the halo contribution, we show how in either case the second Gaussian can be included while prolonging the calculation of the investigated plans by no more than 16%, or the calculation of the most time-consuming energy layers by about 25%. Furthermore, the calculation time is relatively unaffected by the parameterization used, which suggests that these results should hold also for different systems. Finally, since the implementation is based on an algorithm employed by a commercial treatment planning system, it is expected that with adequate tuning, it should be able to reproduce the halo dose from a general beam line with sufficient accuracy. PMID:26734567
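
    The double-Gaussian lateral model adds a wide, low-amplitude halo Gaussian to the narrow primary one; a sketch with illustrative parameters (not the paper's fitted values):

```python
# Sketch of a double-Gaussian lateral pencil-beam dose model: a narrow primary
# Gaussian plus a wide, low-amplitude halo Gaussian. Parameters are illustrative.
import numpy as np

def lateral_dose(r: np.ndarray, sigma1: float, sigma2: float, w: float) -> np.ndarray:
    """(1 - w) * primary + w * halo, each a normalized 2D Gaussian in radius r."""
    g1 = np.exp(-r**2 / (2 * sigma1**2)) / (2 * np.pi * sigma1**2)
    g2 = np.exp(-r**2 / (2 * sigma2**2)) / (2 * np.pi * sigma2**2)
    return (1.0 - w) * g1 + w * g2

r = np.linspace(0.0, 50.0, 200)                    # mm off-axis
profile = lateral_dose(r, sigma1=4.0, sigma2=15.0, w=0.1)
```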

  3. Fast Pencil Beam Dose Calculation for Proton Therapy Using a Double-Gaussian Beam Model.

    PubMed

    da Silva, Joakim; Ansorge, Richard; Jena, Rajesh

    2015-01-01

    The highly conformal dose distributions produced by scanned proton pencil beams (PBs) are more sensitive to motion and anatomical changes than those produced by conventional radiotherapy. The ability to calculate the dose in real-time as it is being delivered would enable, for example, online dose monitoring, and is therefore highly desirable. We have previously described an implementation of a PB algorithm running on graphics processing units (GPUs) intended specifically for online dose calculation. Here, we present an extension to the dose calculation engine employing a double-Gaussian beam model to better account for the low-dose halo. To the best of our knowledge, it is the first such PB algorithm for proton therapy running on a GPU. We employ two different parameterizations for the halo dose, one describing the distribution of secondary particles from nuclear interactions found in the literature and one relying on directly fitting the model to Monte Carlo simulations of PBs in water. Despite the large width of the halo contribution, we show how in either case the second Gaussian can be included while prolonging the calculation of the investigated plans by no more than 16%, or the calculation of the most time-consuming energy layers by about 25%. Furthermore, the calculation time is relatively unaffected by the parameterization used, which suggests that these results should hold also for different systems. Finally, since the implementation is based on an algorithm employed by a commercial treatment planning system, it is expected that with adequate tuning, it should be able to reproduce the halo dose from a general beam line with sufficient accuracy.

  4. SU-E-T-465: Dose Calculation Method for Dynamic Tumor Tracking Using a Gimbal-Mounted Linac

    SciTech Connect

    Sugimoto, S; Inoue, T; Kurokawa, C; Usui, K; Sasai, K; Utsunomiya, S; Ebe, K

    2014-06-01

    Purpose: Dynamic tumor tracking using the gimbal-mounted linac (Vero4DRT, Mitsubishi Heavy Industries, Ltd., Japan) has been available when respiratory motion is significant. The irradiation accuracy of the dynamic tumor tracking has been reported to be excellent. In addition to the irradiation accuracy, a fast and accurate dose calculation algorithm is needed to validate the dose distribution in the presence of respiratory motion, because its multiple phases have to be considered. A modification of the dose calculation algorithm is necessary for the gimbal-mounted linac due to the degrees of freedom of gimbal swing. The dose calculation algorithm for the gimbal motion was implemented using linear transformations between coordinate systems. Methods: The linear transformation matrices between the coordinate systems with and without gimbal swings were constructed using combinations of translation and rotation matrices. The coordinate system where the radiation source is at the origin and the beam axis lies along the z axis was adopted. The transformation can be divided into the translation from the radiation source to the gimbal rotation center, the two rotations around the center relating to the gimbal swings, and the translation from the gimbal center back to the radiation source. After applying the transformation matrix to the phantom or patient image, the dose calculation can be performed as in the case of no gimbal swing. The algorithm was implemented in the treatment planning system PlanUNC (University of North Carolina, NC). The convolution/superposition algorithm was used. The dose calculations with and without gimbal swings were performed for a 3 × 3 cm² field with a grid size of 5 mm. Results: The calculation time was about 3 minutes per beam. No significant additional time due to the gimbal swing was observed. Conclusions: The dose calculation algorithm for the finite gimbal swing was implemented. The calculation time was moderate.
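
    The transformation described can be sketched as a composition of homogeneous matrices: translate the source to the gimbal rotation center, apply the two gimbal rotations, and translate back. Offsets and angles below are toy values:

```python
# Sketch of the coordinate transformation described above, as a composition of
# homogeneous 4x4 matrices. The offset and gimbal angles are toy values, not
# the Vero4DRT's actual geometry.
import numpy as np

def translation(v):
    t = np.eye(4); t[:3, 3] = v; return t

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1.0]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s, 0], [0, 1, 0, 0], [-s, 0, c, 0], [0, 0, 0, 1.0]])

d = np.array([0.0, 0.0, -400.0])          # source-to-gimbal-center offset, mm (toy)
pan, tilt = np.radians(2.0), np.radians(1.5)

# source -> gimbal center -> rotate (pan, tilt) -> back to the source frame
T = translation(-d) @ rot_y(pan) @ rot_x(tilt) @ translation(d)
point = np.array([10.0, 20.0, 150.0, 1.0])
print(T @ point)
```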

  5. Algorithmic advances in stochastic programming

    SciTech Connect

    Morton, D.P.

    1993-07-01

    Practical planning problems with deterministic forecasts of inherently uncertain parameters often yield unsatisfactory solutions. Stochastic programming formulations allow uncertain parameters to be modeled as random variables with known distributions, but the size of the resulting mathematical programs can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We consider two classes of decomposition-based stochastic programming algorithms. The first type of algorithm addresses problems with a "manageable" number of scenarios. The second class incorporates Monte Carlo sampling within a decomposition algorithm. We develop and empirically study an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs within a prespecified tolerance. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of "real-world" multistage stochastic hydroelectric scheduling problems. Recently, there has been an increased focus on decomposition-based algorithms that use sampling within the optimization framework. These approaches hold much promise for solving stochastic programs with many scenarios. A critical component of such algorithms is a stopping criterion to ensure the quality of the solution. With this as motivation, we develop a stopping rule theory for algorithms in which bounds on the optimal objective function value are estimated by sampling. Rules are provided for selecting sample sizes and terminating the algorithm under which asymptotic validity of confidence interval statements for the quality of the proposed solution can be verified. Issues associated with the application of this theory to two sampling-based algorithms are considered, and preliminary empirical coverage results are presented.

  6. Evaluation of 4D dose to a moving target with Monte Carlo dose calculation in stereotactic body radiotherapy for lung cancer.

    PubMed

    Matsugi, Kiyotomo; Nakamura, Mitsuhiro; Miyabe, Yuki; Yamauchi, Chikako; Matsuo, Yukinori; Mizowaki, Takashi; Hiraoka, Masahiro

    2013-01-01

    We evaluated the four-dimensional (4D) dose to a moving target using a Monte Carlo dose calculation algorithm in stereotactic body radiation therapy (SBRT) planning based on the isocenter dose prescription. 4D computed tomography scans were performed for 12 consecutive patients who had 14 tumors. The gross tumor volume (GTV) and internal target volume (ITV) were contoured manually, and the planning target volume (PTV) was defined as the ITV with a 5-mm margin. The beam apertures were shaped to the PTV plus a 5-mm leaf margin. The prescription dose was 48 Gy in 4 fractions at the isocenter. The GTV dose was calculated by accumulation of respiratory-phase dose distributions that were mapped to a reference image, whereas the ITV and PTV doses were calculated with the respiration-averaged images. The doses to 99% (D(99)) of the GTV, ITV, and PTV were 90.2, 89.3, and 82.0%, respectively. The mean difference between the PTV D(99) and GTV D(99) was -9.1% (range -13.4 to -4.0%), and that between the ITV and GTV was -1.1% (range -5.5 to 1.9%). The mean homogeneity index (HI) for the GTV, ITV, and PTV was 1.14, 1.15, and 1.26, respectively. Significant differences were observed in the D(99) and HI between the PTV and GTV, whereas no significant difference was seen between the ITV and GTV. When SBRT planning is performed based on the isocenter dose prescription with a 5-mm PTV margin and a 5-mm leaf margin, the ITV dose provides a good approximation of the GTV dose.
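
    Accumulating a 4D dose amounts to warping each phase dose onto the reference anatomy and combining the results; the sketch below uses identity deformation fields as stand-ins for real registrations:

```python
# Sketch of phase-dose accumulation: warp each respiratory-phase dose grid onto
# the reference anatomy with its deformation field, then average across phases.
# The deformation fields here are identity stand-ins, not real registrations.
import numpy as np
from scipy.ndimage import map_coordinates

def accumulate(phase_doses, dvfs):
    """phase_doses: list of 3D arrays; dvfs: matching (3, z, y, x) displacement fields."""
    grid = np.indices(phase_doses[0].shape).astype(float)
    total = np.zeros_like(phase_doses[0])
    for dose, dvf in zip(phase_doses, dvfs):
        warped = map_coordinates(dose, grid + dvf, order=1)  # pull-back resampling
        total += warped
    return total / len(phase_doses)

doses = [np.random.default_rng(i).random((8, 16, 16)) for i in range(4)]
dvfs = [np.zeros((3, 8, 16, 16)) for _ in range(4)]          # identity fields
ref_dose = accumulate(doses, dvfs)
```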

  7. 2D/3D registration algorithm for lung brachytherapy

    SciTech Connect

    Zvonarev, P. S.; Farrell, T. J.; Hunter, R.; Wierzbicki, M.; Hayward, J. E.; Sur, R. K.

    2013-02-15

    Purpose: A 2D/3D registration algorithm is proposed for registering orthogonal x-ray images with a diagnostic CT volume for high dose rate (HDR) lung brachytherapy. Methods: The algorithm utilizes a rigid registration model based on a pixel/voxel intensity matching approach. To achieve accurate registration, a robust similarity measure combining normalized mutual information, image gradient, and intensity difference was developed. The algorithm was validated using a simple body and anthropomorphic phantoms. Transfer catheters were placed inside the phantoms to simulate the unique image features observed during treatment. The algorithm sensitivity to various degrees of initial misregistration and to the presence of foreign objects, such as ECG leads, was evaluated. Results: The mean registration error was 2.2 and 1.9 mm for the simple body and anthropomorphic phantoms, respectively. The error was comparable to the interoperator catheter digitization error of 1.6 mm. Preliminary analysis of data acquired from four patients indicated a mean registration error of 4.2 mm. Conclusions: Results obtained using the proposed algorithm are clinically acceptable especially considering the complications normally encountered when imaging during lung HDR brachytherapy.
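
    One ingredient of the combined similarity measure, normalized mutual information, can be estimated from a joint histogram; a minimal sketch:

```python
# Sketch of one ingredient of the combined similarity measure: normalized
# mutual information estimated from a joint histogram of the two images.
import numpy as np

def nmi(a: np.ndarray, b: np.ndarray, bins: int = 32) -> float:
    """NMI = (H(A) + H(B)) / H(A, B), estimated from a joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())

img = np.random.default_rng(3).random((64, 64))
print(f"NMI(img, img) = {nmi(img, img):.2f}")   # ~2 for identical images
```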

  8. Monte Carlo dose calculation in dental amalgam phantom.

    PubMed

    Aziz, Mohd Zahri Abdul; Yusoff, A L; Osman, N D; Abdullah, R; Rabaie, N A; Salikin, M S

    2015-01-01

    Ensuring the accuracy of treatment delivery in electron beam therapy has become a great challenge in modern radiation treatment. Tissue inhomogeneity is one of the factors that complicates accurate dose calculation, requiring a complex calculation algorithm such as Monte Carlo (MC). On the other hand, the computed tomography (CT) images used in the treatment planning system need to be trustworthy, as they are the input to radiotherapy treatment. However, with metal amalgam present in the treatment volume, the input CT images show prominent streak artefacts and thus contribute sources of error to the dose calculation. Hence, a streak artefact reduction technique was applied to correct the images, and as a result, better images were observed in terms of structure delineation and density assignment. Furthermore, the amalgam density data were corrected to provide amalgam voxels with accurate density values. The dose uncertainties due to metal amalgam were thereby reduced from 46% to as low as 2% at d80 (depth of the 80% dose beyond Zmax) using the presented strategies. Considering the number of vital and radiosensitive organs in the head and neck regions, this correction strategy is suggested for reducing calculation uncertainties in MC calculation.
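
    The density-correction step can be sketched as a simple voxel override: voxels identified as amalgam (here by a hypothetical CT-number threshold) are reassigned a nominal amalgam density before MC transport:

```python
# Sketch of the density-override step: voxels identified as amalgam (e.g., by
# a CT-number threshold after artefact reduction) are reassigned a nominal
# physical density before Monte Carlo transport. Threshold, density, and the
# HU-to-density ramp below are illustrative assumptions.
import numpy as np

def override_amalgam_density(ct_hu: np.ndarray, density: np.ndarray,
                             hu_threshold: float = 3000.0,
                             amalgam_density: float = 11.0) -> np.ndarray:
    """Return a density grid with amalgam voxels set to a nominal value (g/cm^3)."""
    fixed = density.copy()
    fixed[ct_hu >= hu_threshold] = amalgam_density
    return fixed

ct = np.random.default_rng(4).uniform(-1000, 3500, size=(16, 16, 16))
rho = np.clip(1.0 + ct / 1000.0, 0.0, None)       # crude HU-to-density ramp
rho_fixed = override_amalgam_density(ct, rho)
```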

  9. A filtered backprojection algorithm for cone beam reconstruction using rotational filtering under helical source trajectory

    SciTech Connect

    Tang Xiangyang; Hsieh Jiang

    2004-11-01

    With the evolution from multi-detector-row CT to cone beam (CB) volumetric CT, maintaining reconstruction accuracy becomes more challenging. To combat the severe artifacts caused by a large cone angle in CB volumetric CT, three-dimensional reconstruction algorithms have to be utilized. In