Science.gov

Sample records for acenocoumarol dosing algorithm

  1. Pharmacogenetic-guided dosing of coumarin anticoagulants: algorithms for warfarin, acenocoumarol and phenprocoumon

    PubMed Central

    Verhoef, Talitha I; Redekop, William K; Daly, Ann K; van Schie, Rianne M F; de Boer, Anthonius; Maitland-van der Zee, Anke-Hilse

    2014-01-01

    Coumarin derivatives, such as warfarin, acenocoumarol and phenprocoumon, are frequently prescribed oral anticoagulants to treat and prevent thromboembolism. Because there is a large inter-individual and intra-individual variability in dose–response and a small therapeutic window, treatment with coumarin derivatives is challenging. Certain polymorphisms in CYP2C9 and VKORC1 are associated with lower dose requirements and a higher risk of bleeding. In this review we describe the use of different coumarin derivatives, the pharmacokinetic characteristics of these drugs and differences amongst the coumarins. We also describe the current clinical challenges and the role of pharmacogenetic factors. These genetic factors are used to develop dosing algorithms that predict the appropriate coumarin dose. The effectiveness of this new dosing strategy is currently being investigated in clinical trials. PMID:23919835

  2. A New Pharmacogenetic Algorithm to Predict the Most Appropriate Dosage of Acenocoumarol for Stable Anticoagulation in a Mixed Spanish Population

    PubMed Central

    2016-01-01

    There is a strong association between genetic polymorphisms and the acenocoumarol dosage requirements. Genotyping the polymorphisms involved in the pharmacokinetics and pharmacodynamics of acenocoumarol before starting anticoagulant therapy would result in a better quality of life and a more efficient use of healthcare resources. The objective of this study is to develop a new algorithm that includes clinical and genetic variables to predict the most appropriate acenocoumarol dosage for stable anticoagulation in a wide range of patients. We recruited 685 patients from 2 Spanish hospitals and 1 primary healthcare center. We randomly chose 80% of the patients (n = 556), considering an equitable distribution of genotypes, to form the generation cohort. The remaining 20% (n = 129) formed the validation cohort. Multiple linear regression was used to generate the algorithm using the acenocoumarol stable dosage as the dependent variable and the clinical and genotypic variables as the independent variables. The variables included in the algorithm were age, weight, amiodarone use, enzyme inducer status, international normalized ratio target range and the presence of CYP2C9*2 (rs1799853), CYP2C9*3 (rs1057910), VKORC1 (rs9923231) and CYP4F2 (rs2108622). The proportion of dose variability explained by the algorithm (coefficient of determination, R²) was 52.8% in the generation cohort and 64% in the validation cohort. R² values by indication were: atrial fibrillation, 57.4%; valve replacement, 56.3%; and venous thromboembolic disease, 51.5%. When the patients were classified into 3 dosage groups according to the stable dosage (<11 mg/week, 11–21 mg/week, >21 mg/week), the percentage of correctly classified patients was higher in the intermediate group, whereas differences between pharmacogenetic and clinical algorithms increased in the extreme dosage groups. Our algorithm could improve acenocoumarol dosage selection for patients who will begin treatment with this drug, especially in

  3. A New Pharmacogenetic Algorithm to Predict the Most Appropriate Dosage of Acenocoumarol for Stable Anticoagulation in a Mixed Spanish Population.

    PubMed

    Tong, Hoi Y; Dávila-Fajardo, Cristina Lucía; Borobia, Alberto M; Martínez-González, Luis Javier; Lubomirov, Rubin; Perea León, Laura María; Blanco Bañares, María J; Díaz-Villamarín, Xando; Fernández-Capitán, Carmen; Cabeza Barrera, José; Carcas, Antonio J

    2016-01-01

    There is a strong association between genetic polymorphisms and the acenocoumarol dosage requirements. Genotyping the polymorphisms involved in the pharmacokinetics and pharmacodynamics of acenocoumarol before starting anticoagulant therapy would result in a better quality of life and a more efficient use of healthcare resources. The objective of this study is to develop a new algorithm that includes clinical and genetic variables to predict the most appropriate acenocoumarol dosage for stable anticoagulation in a wide range of patients. We recruited 685 patients from 2 Spanish hospitals and 1 primary healthcare center. We randomly chose 80% of the patients (n = 556), considering an equitable distribution of genotypes, to form the generation cohort. The remaining 20% (n = 129) formed the validation cohort. Multiple linear regression was used to generate the algorithm using the acenocoumarol stable dosage as the dependent variable and the clinical and genotypic variables as the independent variables. The variables included in the algorithm were age, weight, amiodarone use, enzyme inducer status, international normalized ratio target range and the presence of CYP2C9*2 (rs1799853), CYP2C9*3 (rs1057910), VKORC1 (rs9923231) and CYP4F2 (rs2108622). The proportion of dose variability explained by the algorithm (coefficient of determination, R²) was 52.8% in the generation cohort and 64% in the validation cohort. R² values by indication were: atrial fibrillation, 57.4%; valve replacement, 56.3%; and venous thromboembolic disease, 51.5%. When the patients were classified into 3 dosage groups according to the stable dosage (<11 mg/week, 11-21 mg/week, >21 mg/week), the percentage of correctly classified patients was higher in the intermediate group, whereas differences between pharmacogenetic and clinical algorithms increased in the extreme dosage groups. Our algorithm could improve acenocoumarol dosage selection for patients who will begin treatment with this drug, especially in
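
    The modelling step described in records 2 and 3 is ordinary multiple linear regression of the stable weekly dose on clinical and genotype covariates, followed by classification into three dose groups. A minimal sketch of that workflow is below; the column names, the synthetic data and the plain 80/20 random split are illustrative assumptions (the study additionally balanced genotypes across cohorts), not the published model or its coefficients.

```python
# Illustrative sketch (not the published model): multiple linear regression of
# stable weekly acenocoumarol dose on clinical and genotype covariates, then
# classification into the paper's three dose groups. Data are synthetic.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 685
df = pd.DataFrame({
    "age": rng.integers(30, 90, n),
    "weight_kg": rng.normal(75, 12, n),
    "amiodarone": rng.integers(0, 2, n),
    "enzyme_inducer": rng.integers(0, 2, n),
    "inr_target_high": rng.integers(0, 2, n),   # 1 if the higher INR target range
    "cyp2c9_2": rng.integers(0, 3, n),          # rs1799853 variant allele count
    "cyp2c9_3": rng.integers(0, 3, n),          # rs1057910
    "vkorc1": rng.integers(0, 3, n),            # rs9923231
    "cyp4f2": rng.integers(0, 3, n),            # rs2108622
})
# Synthetic "observed" stable dose so the example runs end to end.
df["dose_mg_week"] = (30 - 0.15 * df["age"] + 0.1 * df["weight_kg"]
                      - 3 * df["cyp2c9_3"] - 5 * df["vkorc1"]
                      + rng.normal(0, 3, n)).clip(4)

X, y = df.drop(columns="dose_mg_week"), df["dose_mg_week"]
# Plain 80/20 split; the study also balanced genotype distribution across cohorts.
X_gen, X_val, y_gen, y_val = train_test_split(X, y, test_size=0.2, random_state=1)

model = LinearRegression().fit(X_gen, y_gen)
print("R^2 generation:", round(model.score(X_gen, y_gen), 3))
print("R^2 validation:", round(model.score(X_val, y_val), 3))

# Classify predicted doses into the three groups used in the paper.
groups = pd.Series(pd.cut(model.predict(X_val), [0, 11, 21, np.inf],
                          labels=["<11 mg/week", "11-21 mg/week", ">21 mg/week"]))
print(groups.value_counts())
```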

  4. Differential effects of 2C9*3 and 2C9*2 variants of cytochrome P-450 CYP2C9 on sensitivity to acenocoumarol.

    PubMed

    Hermida, José; Zarza, José; Alberca, Ignacio; Montes, Ramón; López, María Luz; Molina, Eva; Rocha, Eduardo

    2002-06-01

    The 2C9*3 and 2C9*2 polymorphisms of cytochrome P-450 CYP2C9 are associated with hypersensitivity to warfarin and bleeding. The effect of these polymorphisms on sensitivity to acenocoumarol is unknown. Three groups of patients, with low, medium, or high acenocoumarol-dose requirements, were studied. Age influenced the acenocoumarol sensitivity. Bearing the 2C9*3 allele was associated with the need for a lower acenocoumarol dose (odds ratio [OR], 6.02; 95% confidence interval [CI], 1.50-24.18); 80% of carriers of the 2C9*3 allele required a low dose. The 2C9*2 allele was associated with a lower acenocoumarol-dose requirement (OR, 2.70; 95% CI, 1.11-6.58) because of a reduced risk of the need for a high acenocoumarol dose (4.8% of the patients in the high-dose group carried the 2C9*2 allele versus 34.1% and 30.2%, respectively, in the medium-dose and low-dose groups). Therefore, carriers of 2C9*3 may need a low initial loading dose of acenocoumarol. Because acenocoumarol sensitivity with the 2C9*2 variant does not seem to be clinically relevant, the drug could be an alternative to warfarin in 2C9*2 carriers. PMID:12010835

  5. [Resistance to acenocoumarol revealing a missense mutation of the vitamin K epoxide reductase VKORC1: a case report].

    PubMed

    Mboup, M C; Dia, K; Ba, D M; Fall, P D

    2015-02-01

    A significant proportion of the interindividual variability of the response to vitamin K antagonist (VKA) treatment has been associated with genetic factors. Genetic variations affecting the vitamin K epoxide reductase complex subunit 1 (VKORC1) are associated with hypersensitivity or, rarely, with resistance to VKA. We report the case of a black female patient who presented with resistance to acenocoumarol. Despite the use of high doses of acenocoumarol (114 mg/week) for the treatment of recurrent pulmonary embolism, the International Normalized Ratio was below the therapeutic target. This resistance to acenocoumarol was confirmed by the identification of a missense mutation Val66Met of the vitamin K epoxide reductase. PMID:24095214

  6. The effect of acenocoumarol on the antiplatelet effect of clopidogrel.

    PubMed

    Dewilde, Willem J M; Janssen, Paul W A; Bergmeijer, Thomas O; Kelder, Johannes C; Hackeng, Christian M; ten Berg, Jurriën M

    2015-10-01

    Patients exhibiting high on-clopidogrel platelet reactivity (HPR) are at an increased risk of atherothrombotic events following percutaneous coronary interventions (PCI). The use of concomitant medication which is metabolised by the hepatic cytochrome P450 system, such as phenprocoumon, is associated with HPR. We assessed the level of platelet reactivity on clopidogrel in patients who received concomitant treatment with acenocoumarol (another coumarin derivative). Patients scheduled for PCI were included in a prospective, single centre, observational registry. Patients who were adequately pre-treated with clopidogrel were eligible for this analysis, which included 1,582 patients, of whom 104 patients (6.6%) received concomitant acenocoumarol treatment. Platelet reactivity, as measured with the VerifyNow P2Y12 assay and expressed in P2Y12 Reaction Units (PRU), was significantly higher in patients on concomitant acenocoumarol treatment (mean PRU 229 ± 88 vs 187 ± 95; p < 0.001). In patients with concomitant acenocoumarol use, the proportion of patients with HPR was higher, whether HPR was defined as PRU > 208 (57.7% vs 41.1%; p = 0.001) or as PRU ≥ 236 (49.0% vs 31.4%; p < 0.001). In multivariable analysis, concomitant acenocoumarol use was independently associated with a higher PRU and the occurrence of HPR defined as PRU ≥ 236 (OR 2.00, [1.07-3.79]), but not with HPR defined as PRU > 208 (OR 1.37, [0.74-2.54]). PRU also was significantly increased after 1:1 propensity matching (+28.2; p < 0.001). As this was an observational study, confounding by indication cannot be excluded, although multivariable analyses and propensity matching were performed. The impact of the findings from this hypothesis-generating study on clinical outcome requires further investigation. PMID:26177793

  7. Development and Comparison of Warfarin Dosing Algorithms in Stroke Patients

    PubMed Central

    Cho, Sun-Mi; Lee, Kyung-Yul; Choi, Jong Rak

    2016-01-01

    Purpose The genes for cytochrome P450 2C9 (CYP2C9) and vitamin K epoxide reductase complex subunit 1 (VKORC1) have been identified and studied as important genetic determinants of warfarin dosing. We developed a warfarin dosing algorithm for Korean patients with stroke and compared the accuracy of pharmacogenetics-based warfarin dose prediction algorithms. Materials and Methods A total of 101 patients on a stable maintenance dose of warfarin were enrolled. The warfarin dosing algorithm was developed using multiple linear regression analysis. The performance of all the algorithms was characterized by the coefficient of determination, determined by linear regression, and by the mean percent deviation of predicted doses from the actual dose. In addition, we compared the performance of the algorithms using the percentage of predicted doses falling within ±20% of clinically observed doses and by dividing the patients into a low-dose group (≤3 mg/day), an intermediate-dose group (3–7 mg/day), and a high-dose group (≥7 mg/day). Results The newly developed algorithm included the variables of age, body weight, and CYP2C9 and VKORC1 genotype. Our algorithm accounted for 51% of the variation in the warfarin stable dose and performed best at predicting doses within 20% of the actual dose and in the intermediate-dose group. Conclusion Our warfarin dosing algorithm may be useful for Korean patients with stroke. Further studies are necessary to elucidate the clinical utility of genotype-guided dosing and to identify additional genetic associations. PMID:26996562

  8. Pharmacogenetic-guided Warfarin Dosing Algorithm in African-Americans.

    PubMed

    Alzubiedi, Sameh; Saleh, Mohammad I

    2016-01-01

    We aimed to develop a warfarin dosing algorithm for African-Americans. We explored demographic, clinical, and genetic data from a previously collected cohort of 163 African-American patients with a stable warfarin dose. We explored 2 approaches to develop the algorithm: multiple linear regression and an artificial neural network (ANN). The clinical significance of the 2 dosing algorithms was evaluated by calculating the percentage of patients whose predicted dose of warfarin was within 20% of the actual dose. The linear regression model and the ANN model predicted the ideal dose in 52% and 48% of the patients, respectively. The mean absolute error using the linear regression model was estimated to be 10.8 mg compared with 10.9 mg using the ANN. Linear regression and ANN models identified several predictors of warfarin dose, including age, weight, CYP2C9 genotype *1/*1, VKORC1 genotype, rs12777823 genotype, rs2108622 genotype, congestive heart failure, and amiodarone use. In conclusion, we developed a warfarin dosing algorithm for African-Americans. The proposed dosing algorithm has the potential to recommend warfarin doses that are close to the appropriate doses. The use of the more sophisticated ANN approach did not improve the predictive performance of the dosing algorithm except for patients with a dose of ≥49 mg/wk. PMID:26355760
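
    As a rough illustration of the comparison made in record 8, the sketch below fits a multiple linear regression and a small feed-forward network to the same synthetic data and scores both by mean absolute error and by the share of predictions within 20% of the actual dose. The feature set, network size and data are assumptions for the example only, not the study's variables or coefficients.

```python
# Linear regression vs a small neural network for dose prediction, scored by
# MAE and by the fraction of predictions within 20% of the actual dose.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n = 163                                      # cohort size from the abstract
X = rng.normal(size=(n, 8))                  # age, weight, genotypes, CHF, ... (synthetic)
y = np.clip(35 + X @ rng.normal(size=8) * 4 + rng.normal(0, 6, n), 5, None)  # mg/week

def within_20pct(y_true, y_pred):
    """Fraction of patients whose predicted dose is within 20% of the actual dose."""
    return np.mean(np.abs(y_pred - y_true) <= 0.2 * y_true)

for name, est in [("linear", LinearRegression()),
                  ("ANN", MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                                       random_state=0))]:
    pred = cross_val_predict(est, X, y, cv=5)
    print(name,
          "MAE =", round(float(np.mean(np.abs(pred - y))), 1), "mg/week,",
          "within 20% =", round(100 * within_20pct(y, pred), 1), "%")
```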

  9. Effectiveness and safety of dabigatran versus acenocoumarol in ‘real-world’ patients with atrial fibrillation

    PubMed Central

    Korenstra, Jennie; Wijtvliet, E. Petra J.; Veeger, Nic J.G.M.; Geluk, Christiane A.; Bartels, G. Louis; Posma, Jan L.; Piersma-Wichers, Margriet; Van Gelder, Isabelle C.; Rienstra, Michiel; Tieleman, Robert G.

    2016-01-01

    Aims Randomized trials showed non-inferior or superior results of the non-vitamin-K-antagonist oral anticoagulants (NOACs) compared with warfarin. The aim of this study was to assess the effectiveness and safety of dabigatran (direct thrombin inhibitor) vs. acenocoumarol (vitamin K antagonist) in patients with atrial fibrillation (AF) in daily clinical practice. Methods and results In this observational study, we evaluated all consecutive patients who started anticoagulation because of AF in our outpatient clinic from 2010 to 2013. Data were collected from electronic patient charts. Primary outcomes were stroke or systemic embolism and major bleeding. Propensity score matching was applied to address the non-randomized design. In total, 920 consecutive AF patients were enrolled (442 dabigatran, 478 acenocoumarol), of which 2 × 383 were available for analysis after propensity score matching. Mean follow-up duration was 1.5 ± 0.56 year. The mean calculated stroke risk according to the CHA2DS2-VASc score was 3.5%/year in dabigatran- vs. 3.7%/year in acenocoumarol-treated patients. The actual incidence rate of stroke or systemic embolism was 0.8%/year [95% confidence interval (CI): 0.2–2.1] vs. 1.0%/year (95% CI: 0.4–2.1), respectively. Multivariable analysis confirmed this lower but non-significant risk in dabigatran vs. acenocoumarol after adjustment for the CHA2DS2-VASc score [hazard ratio (HR)dabigatran = 0.72, 95% CI: 0.20–2.63, P = 0.61]. According to the HAS-BLED score, the mean calculated bleeding risk was 1.7%/year in both groups. The actual incidence rate of major bleeding was 2.1%/year (95% CI: 1.0–3.8) in the dabigatran group vs. 4.3%/year (95% CI: 2.9–6.2) in the acenocoumarol group. This over 50% reduction remained significant after adjustment for the HAS-BLED score (HRdabigatran = 0.45, 95% CI: 0.22–0.93, P = 0.031). Conclusion In ‘real-world’ patients with AF, dabigatran appears to be as effective as, but significantly safer than, acenocoumarol. PMID:26843571
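
    Record 9 handles the non-randomised design with propensity score matching. A minimal 1:1 nearest-neighbour matching sketch on synthetic covariates is shown below; it is not the study's statistical code, and a real analysis would typically match without replacement and apply a caliper.

```python
# Minimal 1:1 nearest-neighbour propensity score matching on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 920
X = rng.normal(size=(n, 5))                             # age, CHA2DS2-VASc items, ... (toy)
treated = rng.random(n) < 1 / (1 + np.exp(-X[:, 0]))    # treatment assignment depends on X

# 1) Propensity score: probability of receiving the new drug given covariates.
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# 2) Match each treated patient to the control with the closest score
#    (with replacement here, purely for brevity).
t_idx, c_idx = np.where(treated)[0], np.where(~treated)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[c_idx].reshape(-1, 1))
_, match = nn.kneighbors(ps[t_idx].reshape(-1, 1))
matched_controls = c_idx[match.ravel()]

print("matched pairs:", len(t_idx))
# Outcomes (e.g. major bleeding rates) would then be compared between the
# treated patients and their matched controls.
```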

  10. Genetic algorithm dose minimization for an operational layout.

    SciTech Connect

    McLawhorn, S. L.; Kornreich, D. E.; Dudziak, Donald J.

    2002-01-01

    In an effort to reduce the dose to operating technicians performing fixed-time procedures on encapsulated source material, a program has been developed to optimize the layout of workstations within a facility by use of a genetic algorithm. Taking into account the sources present at each station and the time required to complete each procedure, the program utilizes a point kernel dose calculation tool for dose estimates. The genetic algorithm driver employs the dose calculation code as a cost function to determine the optimal spatial arrangement of workstations to minimize the total worker dose.
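
    A stripped-down version of the idea in record 10 is sketched below: a mutation-only evolutionary loop searches permutations of station-to-position assignments, with a 1/r² point-kernel surrogate standing in for the dose calculation tool. Geometry, source terms and procedure times are invented; the actual program uses a proper point-kernel code and a full genetic algorithm driver.

```python
# Toy, mutation-only evolutionary search for the workstation layout that
# minimises total worker dose, with a 1/r^2 surrogate for the dose engine.
import random

positions = [(0, 0), (5, 0), (10, 0), (0, 5), (5, 5), (10, 5)]    # floor spots (m)
source_strength = [8.0, 1.0, 0.5, 4.0, 2.0, 0.2]   # relative source term per station
time_at_station = [1.0, 3.0, 2.0, 0.5, 1.5, 2.5]   # hours per procedure

def total_dose(layout):
    """layout[i] = station placed at positions[i]; a worker doing the procedure
    at position i receives 1/r^2 contributions from the sources at every other
    occupied position, weighted by the procedure time."""
    dose = 0.0
    for i, station in enumerate(layout):
        x0, y0 = positions[i]
        for j, other in enumerate(layout):
            if i == j:
                continue
            x1, y1 = positions[j]
            r2 = (x0 - x1) ** 2 + (y0 - y1) ** 2
            dose += time_at_station[station] * source_strength[other] / r2
    return dose

def mutate(layout):
    # Swap two stations; the child is still a valid permutation.
    a, b = random.sample(range(len(layout)), 2)
    child = layout[:]
    child[a], child[b] = child[b], child[a]
    return child

population = [random.sample(range(6), 6) for _ in range(30)]
for _ in range(200):
    population.sort(key=total_dose)                    # keep the fittest layouts
    population = population[:10] + [mutate(random.choice(population[:10]))
                                    for _ in range(20)]
best = min(population, key=total_dose)
print("best layout:", best, " relative dose:", round(total_dose(best), 3))
```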

  11. A TLD dose algorithm using artificial neural networks

    SciTech Connect

    Moscovitch, M.; Rotunda, J.E.; Tawil, R.A.; Rathbone, B.A.

    1995-12-31

    An artificial neural network was designed and used to develop a dose algorithm for a multi-element thermoluminescence dosimeter (TLD). The neural network architecture is based on the concept of a functional link network (FLN). A neural network is an information-processing method inspired by the biological nervous system. A dose algorithm based on neural networks differs fundamentally from conventional algorithms in that it can learn from its own experience. The neural network algorithm is repeatedly shown the expected dose values (output) associated with given responses of a multi-element dosimeter (input). Trained in this way, it eventually becomes capable of producing its own solution to similar (but not identical) dose calculation problems. For personal dosimetry, the output consists of the desired dose components: deep dose, shallow dose and eye dose. The input consists of the TL data obtained from the readout of a multi-element dosimeter. The neural network approach was applied to the Harshaw Type 8825 TLD, and was shown to significantly improve the performance of this dosimeter, well within the U.S. accreditation requirements for personnel dosimeters.
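
    The functional-link idea in record 11 can be caricatured as a fixed nonlinear expansion of the element readings followed by a simple trained readout. The sketch below does exactly that on synthetic data; the expansion terms, the use of ridge regression and the dose coefficients are illustrative assumptions, not the published algorithm.

```python
# Functional-link-style dose algorithm sketch: expand the four TLD element
# readings with fixed nonlinear terms and fit a linear readout to the dose
# components (deep, shallow, eye). Data and coefficients are synthetic.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n = 500
elements = rng.uniform(0.1, 10.0, size=(n, 4))          # E1..E4 readings (arbitrary units)
# Synthetic "true" doses so the example is self-contained.
targets = np.column_stack([
    0.8 * elements[:, 0] + 0.2 * elements[:, 1],        # deep dose
    0.3 * elements[:, 0] + 0.7 * elements[:, 2],        # shallow dose
    0.5 * elements[:, 0] + 0.5 * elements[:, 3],        # eye dose
]) + rng.normal(0, 0.05, (n, 3))

def functional_link(e):
    """Fixed nonlinear expansion of the element vector(s)."""
    e = np.atleast_2d(e)
    ratios = e[:, [1, 2, 3]] / e[:, [0]]                 # element-to-E1 ratios
    pairs = np.column_stack([e[:, i] * e[:, j]
                             for i in range(4) for j in range(i + 1, 4)])
    return np.hstack([e, ratios, pairs, np.log(e)])

model = Ridge(alpha=1e-3).fit(functional_link(elements), targets)
deep, shallow, eye = model.predict(functional_link([2.0, 1.5, 1.8, 2.2]))[0]
print(f"deep={deep:.2f}  shallow={shallow:.2f}  eye={eye:.2f}")
```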

  12. Impact of dose calculation algorithm on radiation therapy

    PubMed Central

    Chen, Wen-Zhou; Xiao, Ying; Li, Jun

    2014-01-01

    The quality of radiation therapy depends on the ability to maximize the tumor control probability while minimizing the normal tissue complication probability. Both quantities are directly related to the accuracy of dose distributions calculated by treatment planning systems. The commonly used dose calculation algorithms in treatment planning systems are reviewed in this work. The accuracy comparisons among these algorithms are illustrated by summarizing the highly cited research papers on this topic. Further, the correlation between the algorithms and tumor control probability/normal tissue complication probability values is demonstrated by several recent studies from different groups. All the cases demonstrate that dose calculation algorithms play a vital role in radiation therapy. PMID:25431642

  13. A proposed dosing algorithm for the individualized dosing of human immunoglobulin in chronic inflammatory neuropathies.

    PubMed

    Lunn, Michael P; Ellis, Lauren; Hadden, Robert D; Rajabally, Yusuf A; Winer, John B; Reilly, Mary M

    2016-03-01

    Dosing guidelines for immunoglobulin (Ig) treatment in neurological disorders do not consider variations in Ig half-life between patients. Individualization of therapy could optimize clinical outcomes and help control costs. We developed an algorithm to optimize Ig dose based on the patient's response and present this here as an example of how dosing might be individualized in a pharmacokinetically rational way and how this achieves potential dose and cost savings. Patients are "normalized" with no more than two initial doses of 2 g/kg, identifying responders. A third dose is not administered until the patient's condition deteriorates, allowing a "dose interval" to be set. The dose is then reduced until relapse, allowing dose optimization. Using this algorithm, we have individualized Ig doses for 71 chronic inflammatory neuropathy patients. The majority of patients had chronic inflammatory demyelinating polyradiculoneuropathy (n = 39) or multifocal motor neuropathy (n = 24). The mean (standard deviation) dose of Ig administered was 1.4 (0.6) g/kg, with a mean dosing interval of 4.3 weeks (median 4 weeks, range 0.5-10). Use of our standardized algorithm has allowed us to quickly optimize Ig dosing. PMID:26757367
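
    The stepwise protocol in record 13 (up to two induction doses, interval set by time to deterioration, dose stepped down until relapse) can be written as a small procedure. The sketch below is a loose paraphrase with an invented step size and toy clinical callbacks, not the authors' algorithm.

```python
# Rough procedural sketch of dose-interval and dose-level individualisation.
from dataclasses import dataclass
from typing import Optional

@dataclass
class IgPlan:
    dose_g_per_kg: float = 2.0              # starting dose per course
    interval_weeks: Optional[float] = None

def individualise(responded, weeks_until_deterioration, relapsed_at):
    """Three stages: (1) up to two 2 g/kg doses to identify responders,
    (2) withhold the third dose until deterioration to set the dose interval,
    (3) step the dose down until relapse to find the minimum effective dose."""
    plan = IgPlan()
    if not (responded(dose_number=1) or responded(dose_number=2)):
        return None                          # non-responder: stop Ig
    plan.interval_weeks = weeks_until_deterioration()
    step = 0.2                               # invented reduction step (g/kg)
    while plan.dose_g_per_kg > step and not relapsed_at(plan.dose_g_per_kg - step):
        plan.dose_g_per_kg -= step           # keep the last dose without relapse
    return plan

# Toy clinical responses: responds, deteriorates at 4 weeks,
# relapses once the dose drops below 1.4 g/kg.
plan = individualise(lambda dose_number: True,
                     lambda: 4.0,
                     lambda dose: dose < 1.4)
print(plan)   # -> IgPlan(dose_g_per_kg ≈ 1.4, interval_weeks = 4.0)
```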

  14. Dosing Algorithms to Predict Warfarin Maintenance Dose in Caucasians and African Americans

    PubMed Central

    Schelleman, Hedi; Chen, Jinbo; Chen, Zhen; Christie, Jason; Newcomb, Craig W.; Brensinger, Colleen M.; Price, Maureen; Whitehead, Alexander S.; Kealey, Carmel; Thorn, Caroline F.; Samaha, Frederick F.; Kimmel, Stephen E

    2008-01-01

    Objectives The objective of this study was to determine whether clinical, environmental, and genetic factors can be used to develop dosing algorithms for Caucasians and African Americans that perform better than empirically giving 5 mg/day. Methods From April 2002 through December 2005, 259 warfarin initiators were prospectively followed until they reached maintenance dose. Results The Caucasian algorithm included 11 variables (R² = 0.43). This model (51% of doses within 1 mg) performed better than empirical 5 mg/day dosing (29% within 5 ± 1 mg). The African American algorithm included 10 variables (R² = 0.28). This model predicted 37% of doses within 1 mg of the observed dose, a small improvement compared with 5 mg/day (34%). These results were similar to the results we obtained from testing other (published) algorithms. Conclusions The dosing algorithms in Caucasians explained <45% of the variability, and the algorithms in African Americans performed only marginally better than giving 5 mg empirically. PMID:18596683

  15. [The effect of prolonged acenocoumarol therapy on bone density].

    PubMed

    Kiss, J; Tihanyi, L; Nagy, E; Végh, Z; Deli, A; Tahy, A; Korányi, L

    1995-09-24

    The effect of chronic coumarin treatment on bone mineral content was investigated. Bone mineral density was determined by double photon densitometry (Lunar DPXL). The density data (mean ± SE) of 45 cardiac patients (age: 57.0 ± 6.3 y, body mass index: 26.7 ± 3.8 kg/m², cardiac status according to the New York Heart Association: class 2-3), who had been treated with acenocoumarol for at least 2 years (duration of treatment: 75.0 ± 52 months), were compared with the values of 45 age-, body mass index- and cardiac status-matched patients not treated with anticoagulants. The density values of the L2-L4 lumbar region were lower in the treated group (1.041 ± 0.17 vs controls: 1.13 ± 0.15 g/cm², p < 0.05), while no differences in the ultradistal ulnar and radial regions were detected. No correlation between bone mineral density and the duration or dose of coumarin treatment was observed. This observation suggests the importance of regular bone densitometry monitoring of coumarin-treated patients. PMID:7566945

  16. Fast dose algorithm for generation of dose coverage probability for robustness analysis of fractionated radiotherapy

    NASA Astrophysics Data System (ADS)

    Tilly, David; Ahnesjö, Anders

    2015-07-01

    A fast algorithm is constructed to facilitate dose calculation for a large number of randomly sampled treatment scenarios, each representing a possible realisation of a full treatment with geometric, fraction specific displacements for an arbitrary number of fractions. The algorithm is applied to construct a dose volume coverage probability map (DVCM) based on dose calculated for several hundred treatment scenarios to enable the probabilistic evaluation of a treatment plan. For each treatment scenario, the algorithm calculates the total dose by perturbing a pre-calculated dose, separately for the primary and scatter dose components, for the nominal conditions. The ratio of the scenario specific accumulated fluence, and the average fluence for an infinite number of fractions is used to perturb the pre-calculated dose. Irregularities in the accumulated fluence may cause numerical instabilities in the ratio, which is mitigated by regularisation through convolution with a dose pencil kernel. Compared to full dose calculations the algorithm demonstrates a speedup factor of ~1000. The comparisons to full calculations show a 99% gamma index (2%/2 mm) pass rate for a single highly modulated beam in a virtual water phantom subject to setup errors during five fractions. The gamma comparison shows a 100% pass rate in a moving tumour irradiated by a single beam in a lung-like virtual phantom. DVCM iso-probability lines computed with the fast algorithm, and with full dose calculation for each of the fractions, for a hypo-fractionated prostate case treated with rotational arc therapy treatment were almost indistinguishable.

  17. Fast dose algorithm for generation of dose coverage probability for robustness analysis of fractionated radiotherapy.

    PubMed

    Tilly, David; Ahnesjö, Anders

    2015-07-21

    A fast algorithm is constructed to facilitate dose calculation for a large number of randomly sampled treatment scenarios, each representing a possible realisation of a full treatment with geometric, fraction specific displacements for an arbitrary number of fractions. The algorithm is applied to construct a dose volume coverage probability map (DVCM) based on dose calculated for several hundred treatment scenarios to enable the probabilistic evaluation of a treatment plan. For each treatment scenario, the algorithm calculates the total dose by perturbing a pre-calculated dose, separately for the primary and scatter dose components, for the nominal conditions. The ratio of the scenario specific accumulated fluence, and the average fluence for an infinite number of fractions is used to perturb the pre-calculated dose. Irregularities in the accumulated fluence may cause numerical instabilities in the ratio, which is mitigated by regularisation through convolution with a dose pencil kernel. Compared to full dose calculations the algorithm demonstrates a speedup factor of ~1000. The comparisons to full calculations show a 99% gamma index (2%/2 mm) pass rate for a single highly modulated beam in a virtual water phantom subject to setup errors during five fractions. The gamma comparison shows a 100% pass rate in a moving tumour irradiated by a single beam in a lung-like virtual phantom. DVCM iso-probability lines computed with the fast algorithm, and with full dose calculation for each of the fractions, for a hypo-fractionated prostate case treated with rotational arc therapy treatment were almost indistinguishable. PMID:26118844
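
    The perturbation step described in records 16 and 17 can be sketched in one dimension: scale the pre-calculated dose by the ratio of the scenario-accumulated fluence to the infinite-fraction average fluence, smoothing the ratio to suppress numerical spikes. All profile shapes, the setup-error model and the kernels below are invented for illustration; the real algorithm works on full 3D primary and scatter components.

```python
# 1-D sketch: perturb a pre-calculated dose by a regularised fluence ratio.
import numpy as np

x = np.linspace(-50, 50, 201)                       # mm
dose_nominal = np.exp(-(x / 20.0) ** 2)             # pre-calculated dose profile (toy)
fluence_nominal = np.exp(-(x / 22.0) ** 2)          # nominal fluence profile (toy)

# Accumulated fluence for one sampled scenario: 5 fractions with random shifts.
rng = np.random.default_rng(3)
shifts = rng.normal(0.0, 4.0, size=5)               # mm setup errors per fraction
fluence_scenario = np.mean(
    [np.exp(-((x - s) / 22.0) ** 2) for s in shifts], axis=0)

# Average fluence for an "infinite" number of fractions: nominal fluence
# convolved with the Gaussian setup-error distribution.
kernel_x = np.arange(-15, 16)
blur = np.exp(-(kernel_x / 4.0) ** 2 / 2.0)
blur /= blur.sum()
fluence_avg = np.convolve(fluence_nominal, blur, mode="same")

# Regularise the ratio by smoothing numerator and denominator with a narrow
# pencil-kernel-like kernel before dividing, damping spikes where fluence is tiny.
pencil = np.exp(-(kernel_x / 2.0) ** 2 / 2.0)
pencil /= pencil.sum()
ratio = (np.convolve(fluence_scenario, pencil, mode="same")
         / np.maximum(np.convolve(fluence_avg, pencil, mode="same"), 1e-6))

dose_scenario = dose_nominal * ratio                # perturbed scenario dose
print("max scenario/nominal dose ratio:", round(float(ratio.max()), 3))
```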

  18. Validation of a dose warping algorithm using clinically realistic scenarios

    PubMed Central

    Dehghani, H; Green, S; Webster, G J

    2015-01-01

    Objective: Dose warping following deformable image registration (DIR) has been proposed for interfractional dose accumulation. Robust evaluation workflows are vital to clinically implement such procedures. This study demonstrates such a workflow and quantifies the accuracy of a commercial DIR algorithm for this purpose under clinically realistic scenarios. Methods: 12 head and neck (H&N) patient data sets were used for this retrospective study. For each case, four clinically relevant anatomical changes have been manually generated. Dose distributions were then calculated on each artificially deformed image and warped back to the original anatomy following DIR by a commercial algorithm. Spatial registration was evaluated by quantitative comparison of the original and warped structure sets, using conformity index and mean distance to conformity (MDC) metrics. Dosimetric evaluation was performed by quantitative comparison of the dose–volume histograms generated for the calculated and warped dose distributions, which should be identical for the ideal “perfect” registration of mass-conserving deformations. Results: Spatial registration of the artificially deformed image back to the planning CT was accurate (MDC range of 1–2 voxels or 1.2–2.4 mm). Dosimetric discrepancies introduced by the DIR were low (0.02 ± 0.03 Gy per fraction in clinically relevant dose metrics) with no statistically significant difference found (Wilcoxon test, 0.6 ≥ p ≥ 0.2). Conclusion: The reliability of CT-to-CT DIR-based dose warping and image registration was demonstrated for a commercial algorithm with H&N patient data. Advances in knowledge: This study demonstrates a workflow for validation of dose warping following DIR that could assist physicists and physicians in quantifying the uncertainties associated with dose accumulation in clinical scenarios. PMID:25791569

  19. Verification of IMRT dose calculations using AAA and PBC algorithms in dose buildup regions.

    PubMed

    Oinam, Arun S; Singh, Lakhwant

    2010-01-01

    The purpose of this comparative study was to test the accuracy of the anisotropic analytical algorithm (AAA) and pencil beam convolution (PBC) algorithms of the Eclipse treatment planning system (TPS) for dose calculations in the low- and high-dose buildup regions. AAA and PBC algorithms were used to create two intensity-modulated radiotherapy (IMRT) plans of the same optimal fluence generated from a clinically simulated oropharynx case in an in-house fabricated head and neck phantom. The TPS-computed buildup doses were compared with the corresponding measured doses in the phantom using thermoluminescence dosimeters (TLD 100). Analysis of the dose distributions calculated using PBC and AAA shows an increase in gamma value in the dose buildup region, indicating large dose deviation. For surface areas of 1, 50 and 100 cm2, PBC overestimates doses compared with AAA-calculated values in the range of 1.34%-3.62% at 0.6 cm depth, 1.74%-2.96% at 0.4 cm depth, and 1.96%-4.06% at 0.2 cm depth, respectively. In the high-dose buildup region, AAA-calculated doses were lower than TLD-measured doses by an average of 7.56% (SD = 4.73%), while PBC overestimated them by 3.75% (SD = 5.70%) at 0.2 cm depth. However, at 0.4 and 0.6 cm depth, PBC overestimated TLD-measured doses by 5.84% (SD = 4.38%) and 2.40% (SD = 4.63%), respectively, while AAA underestimated the TLD-measured doses by 0.82% (SD = 4.24%) and 1.10% (SD = 4.14%) at the same respective depths. In the low-dose buildup region, both AAA and PBC overestimated the TLD-measured doses at all depths, except for a 2.05% (SD = 10.21%) underestimation by AAA at 0.2 cm depth. The differences between AAA and PBC were statistically significant (p < 0.05) at all depths in the high-dose buildup region, but not in the low-dose buildup region. In conclusion, AAA calculated the dose more accurately than PBC in the clinically important high-dose buildup region at 0.4 cm and 0.6 cm depths. The use of an Orfit cast increases the dose buildup

  20. Linear vs. function-based dose algorithm designs.

    PubMed

    Stanford, N

    2011-03-01

    The performance requirements prescribed in IEC 62387-1, 2007 recommend linear, additive algorithms for external dosimetry [IEC. Radiation protection instrumentation--passive integrating dosimetry systems for environmental and personal monitoring--Part 1: General characteristics and performance requirements. IEC 62387-1 (2007)]. Neither of the two current standards for performance of external dosimetry in the USA addresses the additivity of dose results [American National Standards Institute, Inc. American National Standard for dosimetry personnel dosimetry performance criteria for testing. ANSI/HPS N13.11 (2009); Department of Energy. Department of Energy Standard for the performance testing of personnel dosimetry systems. DOE/EH-0027 (1986)]. While there are significant merits to adopting a purely linear solution to estimating doses from multi-element external dosemeters, differences in the standards result in technical as well as perception challenges in designing a single algorithm approach that will satisfy both IEC and USA external dosimetry performance requirements. The dosimetry performance testing standards in the USA do not incorporate type testing, but rely on biennial performance tests to demonstrate proficiency in a wide range of pure and mixed fields. The test results are used exclusively to judge the system proficiency, with no specific requirements on the algorithm design. Technical challenges include mixed beta/photon fields with a beta dose as low as 0.30 mSv mixed with 0.05 mSv of low-energy photons. Perception-based challenges, resulting from over 20 y of experience with this type of performance testing in the USA, include the common belief that the overall quality of the dosemeter performance can be judged from performance to pure fields. This paper presents synthetic testing results from currently accredited function-based algorithms and newly developed purely linear algorithms. A comparison of the performance data highlights the benefits of each
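
    The distinction discussed in record 20 is between a purely linear, additive mapping from element readings to dose components and a function-based mapping that first classifies the radiation field. A toy contrast is sketched below; the coefficient matrices and the branching rule are invented placeholders, not any accredited algorithm.

```python
# Linear/additive vs function-based multi-element dose algorithm (toy values).
import numpy as np

readings = np.array([1.10, 0.95, 0.40, 1.30])      # four filtered elements (arbitrary units)

# Linear, additive design: dose = W @ readings, so doses from two exposures
# delivered separately always sum to the dose of the combined exposure.
W = np.array([[0.9, 0.1, 0.0, 0.0],                # deep dose Hp(10)
              [0.2, 0.1, 0.6, 0.1]])               # shallow dose Hp(0.07)
deep, shallow = W @ readings

# Function-based design: choose coefficients from an inferred field type,
# which can do well on pure-field performance tests but is not additive.
def function_based(r):
    beta_like = r[2] < 0.5 * r[0]                  # crude field classification
    w = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.1, 0.0, 0.9, 0.0]]) if beta_like else W
    return w @ r

print("linear:         Hp(10)=%.2f  Hp(0.07)=%.2f" % (deep, shallow))
print("function-based: Hp(10)=%.2f  Hp(0.07)=%.2f" % tuple(function_based(readings)))
```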

  1. A correction-based dose calculation algorithm for kilovoltage x rays

    SciTech Connect

    Ding, George X.; Pawlowski, Jason M.; Coffey, Charles W.

    2008-12-15

    Frequent and repeated imaging procedures such as those performed in image-guided radiotherapy (IGRT) programs may add significant dose to radiosensitive organs of radiotherapy patients. It has been shown that kV-CBCT results in doses to bone that are up to a factor of 3-4 higher than those in surrounding soft tissue. Imaging guidance procedures are necessary due to their potential benefits, but the additional incremental dose per treatment fraction may exceed an individual organ tolerance. Hence it is important to manage and account for this additional dose from imaging for radiotherapy patients. Currently available model-based dose calculation methods in radiation treatment planning (RTP) systems are not suitable for low-energy x rays, and new and fast calculation algorithms are needed for an RTP system for kilovoltage dose computations. This study presents a new dose calculation algorithm, referred to as the medium-dependent-correction (MDC) algorithm, for accurate patient dose calculation resulting from kilovoltage x rays. The accuracy of the new algorithm is validated against Monte Carlo calculations. The new algorithm overcomes the deficiency of existing density correction based algorithms in dose calculations for inhomogeneous media, especially for CT-based human volumetric images used in radiotherapy treatment planning.

  2. Comparison of dose calculation algorithms for colorectal cancer brachytherapy treatment with a shielded applicator

    SciTech Connect

    Yan Xiangsheng; Poon, Emily; Reniers, Brigitte; Vuong, Te; Verhaegen, Frank

    2008-11-15

    Colorectal cancer patients are treated at our hospital with ¹⁹²Ir high dose rate (HDR) brachytherapy using an applicator that allows the introduction of a lead or tungsten shielding rod to reduce the dose to healthy tissue. The clinical dose planning calculations are, however, currently performed without taking the shielding into account. To study the dose distributions in shielded cases, three techniques were employed. The first technique was to adapt a shielding algorithm which is part of the Nucletron PLATO HDR treatment planning system. The isodose pattern exhibited unexpected features but was found to be a reasonable approximation. The second technique employed a ray tracing algorithm that assigns a constant dose ratio with/without shielding behind the shielding along a radial line originating from the source. The dose calculation results were similar to the results from the first technique but with improved accuracy. The third and most accurate technique used a dose-matrix-superposition algorithm, based on Monte Carlo calculations. The results from the latter technique showed quantitatively that the dose to healthy tissue is reduced significantly in the presence of shielding. However, it was also found that the dose to the tumor may be affected by the presence of shielding; for about a quarter of the patients treated the volume covered by the 100% isodose lines was reduced by more than 5%, leading to potential tumor cold spots. Use of any of the three shielding algorithms results in improved dose estimates to healthy tissue and the tumor.

  3. Pediatric chest HRCT using the iDose4 Hybrid Iterative Reconstruction Algorithm: Which iDose level to choose?

    NASA Astrophysics Data System (ADS)

    Smarda, M.; Alexopoulou, E.; Mazioti, A.; Kordolaimi, S.; Ploussi, A.; Priftis, K.; Efstathopoulos, E.

    2015-09-01

    The purpose of the study is to determine the appropriate iterative reconstruction (IR) algorithm level that combines image quality and diagnostic confidence for pediatric patients undergoing high-resolution computed tomography (HRCT). During the last 2 years, a total of 20 children up to 10 years old with a clinical presentation of chronic bronchitis underwent HRCT on our department's 64-detector row CT scanner using the iDose IR algorithm, with similar image settings (80 kVp, 40-50 mAs). CT images were reconstructed with all iDose levels (levels 1 to 7) as well as with the filtered back-projection (FBP) algorithm. Subjective image quality was evaluated by 2 experienced radiologists in terms of image noise, sharpness, contrast and diagnostic acceptability using a 5-point scale (1 = excellent image, 5 = non-acceptable image). The presence of artifacts was also noted. All mean scores from both radiologists corresponded to satisfactory image quality (score ≤3), even with the FBP algorithm. Almost excellent (score <2) overall image quality was achieved with iDose levels 5 to 7, but oversmoothing artifacts appearing with iDose levels 6 and 7 affected the diagnostic confidence. In conclusion, the use of iDose level 5 enables almost excellent image quality without considerable artifacts affecting the diagnosis. Further evaluation is needed in order to draw more precise conclusions.

  4. Novel lung IMRT planning algorithms with nonuniform dose delivery strategy to account for respiratory motion.

    PubMed

    Li, Xiang; Zhang, Pengpeng; Mah, Dennis; Gewanter, Richard; Kutcher, Gerald

    2006-09-01

    To effectively deliver radiation dose to lung tumors, respiratory motion has to be considered in treatment planning. In this paper we first present a new lung IMRT planning algorithm, referred to as the dose shaping (DS) method, that shapes the dose distribution according to the probability distribution of the tumor over the breathing cycle to account for respiratory motion. In IMRT planning a dose-based convolution (CON-DOSE) method was generally adopted to compensate for random organ motion by performing 4-D dose calculations using a tumor motion probability density function. We modified the CON-DOSE method to a dose volume histogram based convolution method (CON-DVH) that allows nonuniform dose distribution to account for respiratory motion. We implemented the two new planning algorithms on an in-house IMRT planning system that uses the Eclipse (Varian, Palo Alto, CA) planning workstation as the dose calculation engine. The new algorithms were compared with (1) the conventional margin extension approach in which margin is generated based on the extreme positions of the tumor, (2) the dose-based convolution method, and (3) gating with 3 mm residual motion. Dose volume histogram, tumor control probability, normal tissue complication probability, and mean lung dose were calculated and used to evaluate the relative performance of these approaches at the end-exhale phase of the respiratory cycle. We recruited six patients in our treatment planning study. The study demonstrated that the two new methods could significantly reduce the ipsilateral normal lung dose and outperformed the margin extension method and the dose-based convolution method. Compared with the gated approach that has the best performance in the low dose region, the two methods we proposed have similar potential to escalate tumor dose, but could be more efficient because dose is delivered continuously. PMID:17022235
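
    The dose-based convolution baseline mentioned in record 4 amounts to blurring the static dose distribution with the probability density of the tumour position over the breathing cycle. A one-dimensional sketch with an invented motion PDF follows; it illustrates the convolution idea only, not the paper's DS or CON-DVH methods.

```python
# 1-D sketch: expected dose to a moving target = static dose convolved with
# the tumour-position probability density over the breathing cycle.
import numpy as np

x = np.linspace(-40, 40, 161)                      # mm, 0.5 mm grid
static_dose = (np.abs(x) < 25).astype(float)       # idealised flat field with margin

# Breathing-motion PDF: the tumour spends more time near end-exhale (skewed, toy values).
offsets = np.linspace(-10, 10, 41)
pdf = np.exp(-((offsets + 4) / 3.0) ** 2) + 0.3 * np.exp(-((offsets - 6) / 2.0) ** 2)
pdf /= pdf.sum()

# Expected (motion-blurred) dose: D_eff(x) = sum_i p_i * D(x - s_i)
effective_dose = sum(p * np.interp(x - s, x, static_dose, left=0.0, right=0.0)
                     for p, s in zip(pdf, offsets))

edge = np.argmin(np.abs(x - 24))                   # a point near the field edge
print("static dose at field edge  :", static_dose[edge])
print("expected dose at field edge:", round(float(effective_dose[edge]), 2))
```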

  5. Towards Rational Dosing Algorithms for Vancomycin in Neonates and Infants Based on Population Pharmacokinetic Modeling

    PubMed Central

    Janssen, Esther J. H.; Välitalo, Pyry A. J.; Allegaert, Karel; de Cock, Roosmarijn F. W.; Simons, Sinno H. P.; Sherwin, Catherine M. T.; van den Anker, Johannes N.

    2015-01-01

    Because of the recent awareness that vancomycin doses should aim to meet a target area under the concentration-time curve (AUC) instead of trough concentrations, more aggressive dosing regimens are warranted also in the pediatric population. In this study, both neonatal and pediatric pharmacokinetic models for vancomycin were externally evaluated and subsequently used to derive model-based dosing algorithms for neonates, infants, and children. For the external validation, predictions from previously published pharmacokinetic models were compared to new data. Simulations were performed in order to evaluate current dosing regimens and to propose a model-based dosing algorithm. The AUC/MIC over 24 h (AUC24/MIC) was evaluated for all investigated dosing schedules (target of >400), without any concentration exceeding 40 mg/liter. Both the neonatal and pediatric models of vancomycin performed well in the external data sets, resulting in concentrations that were predicted correctly and without bias. For neonates, a dosing algorithm based on body weight at birth and postnatal age is proposed, with daily doses divided over three to four doses. For infants aged <1 year, doses between 32 and 60 mg/kg/day over four doses are proposed, while above 1 year of age, 60 mg/kg/day seems appropriate. As the time to reach steady-state concentrations varies from 155 h in preterm infants to 36 h in children aged >1 year, an initial loading dose is proposed. Based on the externally validated neonatal and pediatric vancomycin models, novel dosing algorithms are proposed for neonates and children aged <1 year. For children aged 1 year and older, the currently advised maintenance dose of 60 mg/kg/day seems appropriate. PMID:26643337
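
    The AUC-targeted reasoning in record 5 follows from the steady-state relation AUC24 = daily dose / clearance, so the daily dose needed to reach AUC24/MIC > 400 is directly proportional to clearance. The sketch below uses invented clearance, volume and MIC values and a crude peak check; it is not the paper's population pharmacokinetic model or its proposed algorithm.

```python
# Back-of-the-envelope AUC24/MIC-targeted dose selection (all values illustrative).

def daily_dose_for_auc_target(clearance_l_per_h, mic_mg_per_l,
                              target_auc_over_mic=400.0):
    """Daily vancomycin dose (mg) so that AUC24/MIC reaches the target.
    At steady state, AUC24 (mg*h/L) = daily dose (mg) / clearance (L/h)."""
    return target_auc_over_mic * mic_mg_per_l * clearance_l_per_h

def peak_concentration_ok(dose_per_administration_mg, volume_l,
                          limit_mg_per_l=40.0):
    """Crude one-compartment check against the 40 mg/L ceiling mentioned in
    the abstract (ignores drug remaining from earlier doses)."""
    return dose_per_administration_mg / volume_l <= limit_mg_per_l

weight_kg = 4.0                 # illustrative infant
clearance = 0.1 * weight_kg     # L/h, invented value (not a model estimate)
volume = 0.7 * weight_kg        # L, invented value
daily = daily_dose_for_auc_target(clearance, mic_mg_per_l=1.0)
per_dose = daily / 4            # divided over four doses, as proposed for <1 y
print(f"daily dose ~{daily:.0f} mg ({daily / weight_kg:.0f} mg/kg/day), "
      f"peak below limit: {peak_concentration_ok(per_dose, volume)}")
```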

  6. TVA's dose algorithm for Panasonic type 802 TLDs

    SciTech Connect

    Colvett, R.D.; Gupta, V.P.; Hudson, C.G.

    1988-10-01

    The TVA algorithm for interpreting readings from Panasonic type 802 multi-element TLDs uses a calculational method similar to that for unfolding neutron spectra. A response matrix is constructed for the four elements in the Panasonic TLD based on tests performed in a variety of single-component radiation fields. The matrix is then used to unfold the responses of the elements when the dosimeter is exposed to mixed radiation fields. In this paper the response matrix and the calculational method are described in detail, and test results are presented that verify the algorithm's effectiveness.
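
    The unfolding in record 6 treats the element readings as a response matrix times the component doses and inverts that relation for mixed fields. A minimal non-negative least-squares sketch is below; the matrix entries and doses are invented, not TVA's calibration data.

```python
# Unfold component doses d from element readings m, modelled as m = R @ d.
import numpy as np
from scipy.optimize import nnls

# Rows: TLD elements 1-4; columns: field components (e.g. gamma, beta, x ray).
R = np.array([[1.00, 0.10, 0.80],
              [1.00, 0.05, 0.30],
              [1.00, 0.90, 0.60],
              [1.00, 0.20, 1.00]])

true_dose = np.array([2.0, 1.0, 0.5])            # mSv per component (made up)
readings = R @ true_dose + np.random.default_rng(0).normal(0, 0.02, 4)

unfolded, residual = nnls(R, readings)           # enforce non-negative doses
print("unfolded component doses:", np.round(unfolded, 2))
```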

  7. Accuracy of patient specific organ-dose estimates obtained using an automated image segmentation algorithm

    NASA Astrophysics Data System (ADS)

    Gilat-Schmidt, Taly; Wang, Adam; Coradi, Thomas; Haas, Benjamin; Star-Lack, Josh

    2016-03-01

    The overall goal of this work is to develop a rapid, accurate and fully automated software tool to estimate patient-specific organ doses from computed tomography (CT) scans using a deterministic Boltzmann Transport Equation solver and automated CT segmentation algorithms. This work quantified the accuracy of organ dose estimates obtained by an automated segmentation algorithm. The investigated algorithm uses a combination of feature-based and atlas-based methods. A multiatlas approach was also investigated. We hypothesize that the auto-segmentation algorithm is sufficiently accurate to provide organ dose estimates since random errors at the organ boundaries will average out when computing the total organ dose. To test this hypothesis, twenty head-neck CT scans were expertly segmented into nine regions. A leave-one-out validation study was performed, where every case was automatically segmented with each of the remaining cases used as the expert atlas, resulting in nineteen automated segmentations for each of the twenty datasets. The segmented regions were applied to gold-standard Monte Carlo dose maps to estimate mean and peak organ doses. The results demonstrated that the fully automated segmentation algorithm estimated the mean organ dose to within 10% of the expert segmentation for regions other than the spinal canal, with median error for each organ region below 2%. In the spinal canal region, the median error was 7% across all data sets and atlases, with a maximum error of 20%. The error in peak organ dose was below 10% for all regions, with a median error below 4% for all organ regions. The multiple-case atlas reduced the variation in the dose estimates and additional improvements may be possible with more robust multi-atlas approaches. Overall, the results support potential feasibility of an automated segmentation algorithm to provide accurate organ dose estimates.
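
    The dose comparison in record 7 reduces to applying two organ masks (automated and expert) to the same voxel dose map and comparing mean and peak organ dose. A toy sketch follows, with a shifted mask standing in for segmentation boundary errors; array sizes and dose values are invented.

```python
# Mean and peak organ dose from an automated vs an expert segmentation mask.
import numpy as np

rng = np.random.default_rng(0)
dose = rng.uniform(0.0, 2.0, size=(40, 40, 40))        # Gy per voxel (toy values)

expert_mask = np.zeros_like(dose, dtype=bool)
expert_mask[10:25, 12:28, 15:30] = True                # "true" organ region
# Shift the mask by a voxel or two to mimic boundary errors of auto-segmentation.
auto_mask = np.roll(expert_mask, shift=(1, -1, 0), axis=(0, 1, 2))

def organ_dose(mask):
    return dose[mask].mean(), dose[mask].max()

mean_e, peak_e = organ_dose(expert_mask)
mean_a, peak_a = organ_dose(auto_mask)
print(f"mean dose error: {100 * abs(mean_a - mean_e) / mean_e:.2f} %")
print(f"peak dose error: {100 * abs(peak_a - peak_e) / peak_e:.2f} %")
```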

  8. Development of new two-dosimeter algorithm for effective dose in ICRP Publication 103.

    PubMed

    Kim, Chan Hyeong; Cho, Sungkoo; Jeong, Jong Hwi; Bolch, Wesley E; Reece, Warren D; Poston, John W

    2011-05-01

    The two-dosimeter method, which employs one dosimeter on the chest and the other on the back, determines the effective dose with sufficient accuracy for complex or unknown irradiation geometries. The two-dosimeter method, with a suitable algorithm, neither significantly overestimates (in most cases) nor seriously underestimates the effective dose, not even for extreme exposure geometries. Recently, however, the definition of the effective dose itself was changed in ICRP Publication 103; that is, the organ and tissue configuration employed in calculations of effective dose, along with the related tissue weighting factors, was significantly modified. In the present study, therefore, a two-dosimeter algorithm was developed for the new ICRP 103 definition of effective dose. To that end, first, effective doses and personal dosimeter responses were calculated using the ICRP reference phantoms and the MCNPX code for many incident beam directions. Next, a systematic analysis of the calculated values was performed to determine an optimal algorithm. Finally, the developed algorithm was tested by applying it to beam irradiation geometries specifically selected as extreme exposure geometries, and the results were compared with those for the previous algorithm that had been developed for the effective dose given in ICRP Publication 60. PMID:21451315

  9. Comprehensive evaluation and clinical implementation of commercially available Monte Carlo dose calculation algorithm.

    PubMed

    Zhang, Aizhen; Wen, Ning; Nurushev, Teamour; Burmeister, Jay; Chetty, Indrin J

    2013-01-01

    A commercial electron Monte Carlo (eMC) dose calculation algorithm has become available in the Eclipse treatment planning system. The purpose of this work was to evaluate the eMC algorithm and investigate the clinical implementation of this system. The beam modeling of the eMC algorithm was performed for beam energies of 6, 9, 12, 16, and 20 MeV for a Varian Trilogy and all available applicator sizes in the Eclipse treatment planning system. The accuracy of the eMC algorithm was evaluated in a homogeneous water phantom, solid water phantoms containing lung and bone materials, and an anthropomorphic phantom. In addition, dose calculation accuracy was compared between pencil beam (PB) and eMC algorithms in the same treatment planning system for heterogeneous phantoms. The overall agreement between eMC calculations and measurements was within 3%/2 mm, while the PB algorithm had large errors (up to 25%) in predicting dose distributions in the presence of inhomogeneities such as bone and lung. The clinical implementation of the eMC algorithm was investigated by performing treatment planning for 15 patients with lesions in the head and neck, breast, chest wall, and sternum. The dose distributions were calculated using PB and eMC algorithms with no smoothing and all three levels of 3D Gaussian smoothing for comparison. Based on a routine electron beam therapy prescription method, the number of eMC calculated monitor units (MUs) was found to increase with increased 3D Gaussian smoothing levels. 3D Gaussian smoothing greatly improved the visual usability of dose distributions and produced better target coverage. Differences in calculated MUs and dose distributions between eMC and PB algorithms could be significant when oblique beam incidence, surface irregularities, and heterogeneous tissues were present in the treatment plans. In our patient cases, monitor unit differences of up to 7% were observed between PB and eMC algorithms. Monitor unit calculations were also performed

  10. Evaluation of six TPS algorithms in computing entrance and exit doses.

    PubMed

    Tan, Yun I; Metwaly, Mohamed; Glegg, Martin; Baggarley, Shaun; Elliott, Alex

    2014-01-01

    Entrance and exit doses are commonly measured in in vivo dosimetry for comparison with expected values, usually generated by the treatment planning system (TPS), to verify accuracy of treatment delivery. This report aims to evaluate the accuracy of six TPS algorithms in computing entrance and exit doses for a 6 MV beam. The algorithms tested were: pencil beam convolution (Eclipse PBC), analytical anisotropic algorithm (Eclipse AAA), AcurosXB (Eclipse AXB), FFT convolution (XiO Convolution), multigrid superposition (XiO Superposition), and Monte Carlo photon (Monaco MC). Measurements with ionization chamber (IC) and diode detector in water phantoms were used as a reference. Comparisons were done in terms of central axis point dose, 1D relative profiles, and 2D absolute gamma analysis. Entrance doses computed by all TPS algorithms agreed to within 2% of the measured values. Exit doses computed by XiO Convolution, XiO Superposition, Eclipse AXB, and Monaco MC agreed with the IC measured doses to within 2%-3%. Meanwhile, Eclipse PBC and Eclipse AAA computed exit doses were higher than the IC measured doses by up to 5.3% and 4.8%, respectively. Both algorithms assume that full backscatter exists even at the exit level, leading to an overestimation of exit doses. Despite good agreements at the central axis for Eclipse AXB and Monaco MC, 1D relative comparisons showed profiles mismatched at depths beyond 11.5 cm. Overall, the 2D absolute gamma (3%/3 mm) pass rates were better for Monaco MC, while Eclipse AXB failed mostly at the outer 20% of the field area. The findings of this study serve as a useful baseline for the implementation of entrance and exit in vivo dosimetry in clinical departments utilizing any of these six common TPS algorithms for reference comparison. PMID:24892349

  11. Dose-calculation algorithms in the context of inhomogeneity corrections for high energy photon beams

    SciTech Connect

    Papanikolaou, Niko; Stathakis, Sotirios

    2009-10-15

    Radiation therapy has witnessed a plethora of innovations and developments in the past 15 years. Since the introduction of computed tomography for treatment planning there has been a steady introduction of new methods to refine treatment delivery. Imaging continues to be an integral part of the planning, but also the delivery, of modern radiotherapy. However, all the efforts of image guided radiotherapy, intensity-modulated planning and delivery, adaptive radiotherapy, and everything else that we pride ourselves in having in the armamentarium can fall short, unless there is an accurate dose-calculation algorithm. The agreement between the calculated and delivered doses is of great significance in radiation therapy since the accuracy of the absorbed dose as prescribed determines the clinical outcome. Dose-calculation algorithms have evolved greatly over the years in an effort to be more inclusive of the effects that govern the true radiation transport through the human body. In this Vision 20/20 paper, we look back to see how it all started and where things are now in terms of dose algorithms for photon beams and the inclusion of tissue heterogeneities. Convolution-superposition algorithms have dominated the treatment planning industry for the past few years. Monte Carlo techniques have an inherent accuracy that is superior to any other algorithm and as such will continue to be the gold standard, along with measurements, and maybe one day will be the algorithm of choice for all particle treatment planning in radiation therapy.

  12. Comparison of selected dose calculation algorithms in radiotherapy treatment planning for tissues with inhomogeneities

    NASA Astrophysics Data System (ADS)

    Woon, Y. L.; Heng, S. P.; Wong, J. H. D.; Ung, N. M.

    2016-03-01

    Inhomogeneity correction is recommended for accurate dose calculation in radiotherapy treatment planning since the human body is highly inhomogeneous, containing bones and air cavities. However, each dose calculation algorithm has its own limitations. This study assesses the accuracy of five algorithms that are currently implemented for treatment planning: pencil beam convolution (PBC), superposition (SP), anisotropic analytical algorithm (AAA), Monte Carlo (MC) and Acuros XB (AXB). The calculated dose was compared with the measured dose using radiochromic film (Gafchromic EBT2) in inhomogeneous phantoms. In addition, the dosimetric impact of different algorithms on intensity modulated radiotherapy (IMRT) was studied for the head and neck region. MC had the best agreement with the measured percentage depth dose (PDD) within the inhomogeneous region. This was followed by AXB, AAA, SP and PBC. For IMRT planning, the MC algorithm is recommended in preference to PBC and SP. The MC and AXB algorithms were found to have better accuracy in terms of inhomogeneity correction and should be used for tumour volumes in the proximity of inhomogeneous structures.

  13. Dose algorithm determination for the Los Alamos National Laboratory personnel dosimetry system

    SciTech Connect

    Patterson, J.M.

    1995-12-31

    One of the most important aspects of a TLD dosimetry system is the dose algorithm used to convert the signals from the badge reader to an estimate of a worker's dose. It is now more important than ever to have an accurate algorithm to estimate dose well below regulatory limits. Dosimetry systems for DOE laboratories must meet minimum performance standards based on DOELAP criteria. The purpose of this paper is to describe the development of a dose algorithm for a new TLD dosimeter that has been developed at Los Alamos National Laboratory. It is expected that DOELAP testing will start in 1995. Initial results indicate that the system will be able to exceed the minimum performance criteria by a large margin. The enhanced ability of the dosimeter to determine beta, gamma, and neutron energies makes it very useful in the various radiation fields encountered at the laboratory.

  14. Optimal dosing of warfarin and other coumarin anticoagulants: the role of genetic polymorphisms.

    PubMed

    Daly, Ann K

    2013-03-01

    Coumarin anticoagulants, which include warfarin, acenocoumarol and phenprocoumon, are among the most widely prescribed drugs worldwide. There is now a large body of published data showing that genotypes for certain common polymorphisms in the genes encoding the target vitamin K epoxide reductase (G-1639A/C1173T) and the main metabolizing enzyme CYP2C9 (CYP2C9*2 and *3 alleles) are important determinants of the individual coumarin anticoagulant dose requirement. Additional less common polymorphisms in these genes together with polymorphisms in other genes relevant to blood coagulation such as the cytochrome P450 CYP4F2, gamma-glutamyl carboxylase, calumenin and cytochrome P450 oxidoreductase may also be significant predictors of dose, especially in ethnic groups such as Africans where there have been fewer genetic studies compared with European populations. Using relevant genotypes to calculate starting dose may improve safety during the initiation period. Various algorithms for dose calculation, which also take patient age and other characteristics into consideration, have been developed for all three widely used coumarin anticoagulants and are now being tested in ongoing large randomised clinical trials. One recently completed study has provided encouraging results suggesting that calculation of warfarin dose on the basis of individual patient genotype leads to fewer adverse events and a higher proportion of time within the therapeutic coagulation rate window, but these findings still need confirmation. PMID:23376975

  15. The impact of photon dose calculation algorithms on expected dose distributions in lungs under different respiratory phases

    NASA Astrophysics Data System (ADS)

    Fogliata, Antonella; Nicolini, Giorgia; Vanetti, Eugenio; Clivio, Alessandro; Winkler, Peter; Cozzi, Luca

    2008-05-01

    A planning study was carried out on a cohort of CT datasets from breast patients scanned during different respiratory phases. The aim of the study was to investigate the influence of different air filling in the lungs on the calculation accuracy of photon dose algorithms and to identify potential patterns of failure with clinical implications. The selected respiratory phases were free breathing (FB), representative of typical end expiration, and deep inspiration breath hold (DIBH), a typical condition for clinical treatment with respiratory gating. The algorithms investigated were the pencil beam (PBC), the anisotropic analytical algorithm (AAA) and the collapsed cone (CC) from the Varian Eclipse or Philips Pinnacle planning system. Reference benchmark calculations were performed with Voxel Monte Carlo (VMC++). The analysis was performed in terms of physical quantities, inspecting either dose-volume or dose-mass histograms, and in terms of an extension to three dimensions of the γ index of Low. Results were stratified according to breathing phase and algorithm. Collectives acquired in FB or DIBH showed well-separated average lung density distributions with mean densities of 0.27 ± 0.04 and 0.16 ± 0.02 g cm-3, respectively, and average peak densities of 0.17 ± 0.03 and 0.09 ± 0.02 g cm-3. Analysis of volume-dose or mass-dose histograms confirmed the expected deviations in PBC results due to the missing lateral transport of electrons, with underestimations in the low-dose region and overestimations in the high-dose region. The γ analysis showed that PBC is systematically deficient compared to VMC++ over the entire range of lung densities and dose levels, with severe violations in both respiratory phases. The fraction of lung voxels with γ > 1 for PBC reached 25% in DIBH and about 15% in FB. In contrast, CC and AAA performed similarly, with fractions of lung voxels with γ > 1 on average below 2% in FB and 4-5% (AAA) or 6-8% (CC) in DIBH. In summary, PBC

  16. Independent absorbed-dose calculation using the Monte Carlo algorithm in volumetric modulated arc therapy

    PubMed Central

    2014-01-01

    Purpose To report the results of independent absorbed-dose calculations based on a Monte Carlo (MC) algorithm in volumetric modulated arc therapy (VMAT) for various treatment sites. Methods and materials All treatment plans were created by the superposition/convolution (SC) algorithm of SmartArc (Pinnacle V9.2, Philips). The beam information was converted into the format of Monaco V3.3 (Elekta), which uses the X-ray voxel-based MC (XVMC) algorithm. The dose distribution was independently recalculated in Monaco. The doses for the planning target volume (PTV) and the organs at risk (OARs) were analyzed via comparisons with those of the treatment plan. Before performing the independent absorbed-dose calculations, a validation was conducted via irradiation from 3 different gantry angles with a 10- × 10-cm2 field. For the independent absorbed-dose calculations, 15 patients with cancer (prostate, 5; lung, 5; head and neck, 3; rectal, 1; and esophageal, 1) who were treated with single-arc VMAT were selected. To classify the cause of the dose differences between the Pinnacle and Monaco TPSs, their calculations were also compared with measurement data. Results In the validation, the dose in Pinnacle agreed with that in Monaco within 1.5%. The agreement in VMAT calculations between Pinnacle and Monaco using phantoms was exceptional; at the isocenter, the difference was less than 1.5% for all the patients. For the independent absorbed-dose calculations, the agreement was also extremely good. For the mean dose for the PTV in particular, the agreement was within 2.0% in all the patients; specifically, no large difference was observed for high-dose regions. Conversely, a significant difference was observed in the mean dose for the OARs. For patients with prostate cancer, the mean rectal dose calculated in Monaco was significantly smaller than that calculated in Pinnacle. Conclusions There was no remarkable difference between the SC and XVMC calculations in the high-dose regions

  17. SU-E-T-67: Clinical Implementation and Evaluation of the Acuros Dose Calculation Algorithm

    SciTech Connect

    Yan, C; Combine, T; Dickens, K; Wynn, R; Pavord, D; Huq, M

    2014-06-01

    Purpose: The main aim of the current study is to present a detailed description of the implementation of the Acuros XB dose calculation algorithm and to evaluate its clinical impact by comparing it with the AAA algorithm. Methods: The source models for both Acuros XB and AAA were configured by importing the same measured beam data into the Eclipse treatment planning system. Both algorithms were evaluated by comparing calculated dose with measured dose on a homogeneous water phantom for field sizes ranging from 6 cm × 6 cm to 40 cm × 40 cm. Central-axis and off-axis points at different depths were chosen for the comparison. Similarly, wedge fields with wedge angles from 15 to 60 degrees were used. In addition, variable field sizes for a heterogeneous phantom were used to evaluate the Acuros algorithm. Finally, both Acuros and AAA were tested on VMAT patient plans for various sites. Dose distributions and calculation times were compared. Results: On average, computation time was reduced by at least 50% by Acuros XB compared with AAA on single fields and VMAT plans. When used for open 6 MV photon beams on the homogeneous water phantom, both Acuros XB and AAA calculated doses were within 1% of measurement. For 23 MV photon beams, the calculated doses were within 1.5% of measured doses for Acuros XB and 2% for AAA. When the heterogeneous phantom was used, Acuros XB also improved accuracy. Conclusion: Compared with AAA, Acuros XB can improve accuracy while significantly reducing computation time for VMAT plans.

  18. An algorithm to calculate a collapsed arc dose matrix in volumetric modulated arc therapy

    SciTech Connect

    Arumugam, Sankar; Xing Aitang; Jameson, Michael; Holloway, Lois

    2013-07-15

    Purpose: The delivery of volumetric modulated arc therapy (VMAT) is more complex than other conformal radiotherapy techniques. In this work, the authors present the feasibility of performing routine verification of VMAT delivery using a dose matrix measured by a gantry-mounted 2D ion chamber array and the corresponding dose matrix calculated by an in-house developed algorithm. Methods: The Pinnacle v9.0 treatment planning system (TPS) was used in this study to generate VMAT plans for a 6 MV photon beam from an Elekta Synergy linear accelerator. An algorithm was developed and implemented with in-house computer code to calculate the dose matrix resulting from a VMAT arc in a plane perpendicular to the beam at isocenter. The algorithm was validated using measurement of standard patterns and clinical VMAT plans with a 2D ion chamber array. The clinical VMAT plans were also validated using ArcCHECK measurements. The measured and calculated dose matrices were compared using gamma (γ) analysis with 3%/3 mm criteria and a γ tolerance of 1. Results: The dose matrix comparison of the standard patterns showed excellent agreement, with a mean γ pass rate of 97.7% (σ = 0.4%). The validation of clinical VMAT plans using the dose matrix predicted by the algorithm and the corresponding measured dose matrices also showed good agreement, with a mean γ pass rate of 97.6% (σ = 1.6%). The validation of clinical VMAT plans using ArcCHECK measurements showed a mean pass rate of 95.6% (σ = 1.8%). Conclusions: The developed algorithm was shown to accurately predict the dose matrix, in a plane perpendicular to the beam, by considering all possible leaf trajectories in a VMAT delivery. This enables the verification of VMAT delivery using a 2D array detector mounted on the treatment head.
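
    The γ analysis used above can be sketched in a few lines. The brute-force implementation below computes a global 2D γ map for a 3%/3 mm criterion; the array names, the isotropic pixel spacing, and the absence of a low-dose threshold or sub-pixel interpolation are simplifying assumptions, and clinical tools are considerably more refined.

```python
import numpy as np

def gamma_index_2d(ref, evl, spacing_mm, dose_tol=0.03, dist_tol_mm=3.0):
    """Brute-force global 2D gamma analysis (3%/3 mm by default).
    ref, evl : 2D dose arrays on the same grid; spacing_mm : isotropic pixel spacing.
    Returns the gamma map; the pass rate is the fraction of points with gamma <= 1."""
    dose_norm = dose_tol * ref.max()                       # global dose-difference criterion
    search = int(np.ceil(3 * dist_tol_mm / spacing_mm))    # limit the spatial search window
    ny, nx = ref.shape
    gamma = np.full_like(ref, np.inf, dtype=float)
    for j in range(ny):
        for i in range(nx):
            j0, j1 = max(0, j - search), min(ny, j + search + 1)
            i0, i1 = max(0, i - search), min(nx, i + search + 1)
            jj, ii = np.mgrid[j0:j1, i0:i1]
            dist2 = ((jj - j) ** 2 + (ii - i) ** 2) * spacing_mm ** 2
            dd2 = (evl[j0:j1, i0:i1] - ref[j, i]) ** 2
            gamma[j, i] = np.sqrt(np.min(dist2 / dist_tol_mm ** 2 + dd2 / dose_norm ** 2))
    return gamma

# Example pass rate between a measured and a calculated dose plane:
# g = gamma_index_2d(measured, calculated, spacing_mm=1.0)
# pass_rate = 100.0 * np.count_nonzero(g <= 1.0) / g.size
```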

  19. Specification of absorbed dose to water using model-based dose calculation algorithms for treatment planning in brachytherapy

    NASA Astrophysics Data System (ADS)

    Carlsson Tedgren, Åsa; Alm Carlsson, Gudrun

    2013-04-01

    Model-based dose calculation algorithms (MBDCAs), recently introduced in treatment planning systems (TPS) for brachytherapy, calculate tissue absorbed doses. In the TPS framework, doses have until now been reported as dose to water, and water may still be preferred as a dose specification medium. Dose to tissue medium Dmed then needs to be converted into dose to water in tissue Dw,med. Methods to calculate absorbed dose to differently sized water compartments/cavities inside tissue, infinitesimal (used for definition of absorbed dose), small, large or intermediate, are reviewed. Burlin theory is applied to estimate photon energies at which cavity sizes in the range 1 nm-10 mm can be considered small or large. Photon and electron energy spectra are calculated at 1 cm distance from the central axis in cylindrical phantoms of bone, muscle and adipose tissue for 20, 50, 300 keV photons and photons from 125I, 169Yb and 192Ir sources; ratios of mass-collision-stopping powers and mass energy absorption coefficients are calculated as applicable to convert Dmed into Dw,med for small and large cavities. Results show that 1-10 nm sized cavities are small at all investigated photon energies; 100 µm cavities are large only at photon energies <20 keV. A choice of an appropriate conversion coefficient Dw,med/Dmed is discussed in terms of the cavity size in relation to the size of important cellular targets. Free radicals from DNA-bound water of nanometre dimensions contribute to DNA damage and cell killing and may be the most important water compartment in cells, implying use of ratios of mass-collision-stopping powers for converting Dmed into Dw,med.
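
    The conversions reviewed above follow standard cavity theory: for a small (Bragg-Gray) cavity the water-to-medium ratio of mass collision stopping powers is used, for a large cavity the ratio of mass energy absorption coefficients, and Burlin's general cavity theory interpolates between the two with a size-dependent weighting parameter d.

```latex
% Small (Bragg-Gray) cavity
D_{w,\mathrm{med}} = D_{\mathrm{med}}\,
  \left(\frac{\overline{S}_{\mathrm{col}}}{\rho}\right)^{w}_{\mathrm{med}}
% Large cavity
D_{w,\mathrm{med}} = D_{\mathrm{med}}\,
  \left(\frac{\overline{\mu}_{\mathrm{en}}}{\rho}\right)^{w}_{\mathrm{med}}
% Intermediate cavity sizes (Burlin general cavity theory), with 0 \le d \le 1
D_{w,\mathrm{med}} = D_{\mathrm{med}}
  \left[\, d \left(\frac{\overline{S}_{\mathrm{col}}}{\rho}\right)^{w}_{\mathrm{med}}
  + (1-d)\left(\frac{\overline{\mu}_{\mathrm{en}}}{\rho}\right)^{w}_{\mathrm{med}} \right]
```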

  20. A single TLD dose algorithm to satisfy federal standards and typical field conditions

    SciTech Connect

    Stanford, N.; McCurdy, D.E.

    1990-06-01

    Modern whole-body dosimeters are often required to accurately measure the absorbed dose in a wide range of radiation fields. While programs are commonly developed around the fields tested as part of the National Voluntary Accreditation Program (NVLAP), the actual fields of application may be significantly different. Dose algorithms designed to meet the NVLAP standard, which emphasizes photons and high-energy beta radiation, may not be capable of the beta-energy discrimination necessary for accurate assessment of absorbed dose in the work environment. To address this problem, some processors use one algorithm for NVLAP testing and one or more different algorithms for the work environments. After several years of experience with a multiple algorithm approach, the Dosimetry Services Group of Yankee Atomic Electric Company (YAEC) developed a one-algorithm system for use with a four-element TLD badge using Li2B4O7 and CaSO4 phosphors. The design of the dosimeter allows the measurement of the effective energies of both photon and beta components of the radiation field, resulting in excellent mixed-field capability. The algorithm was successfully tested in all of the NVLAP photon and beta fields, as well as several non-NVLAP fields representative of the work environment. The work environment fields, including low- and medium-energy beta radiation and mixed fields of low-energy photons and beta particles, are often more demanding than the NVLAP fields. This paper discusses the development of the algorithm as well as some results of the system testing including: mixed-field irradiations, angular response, and a unique test to demonstrate the stability of the algorithm. An analysis of the uncertainty of the reported doses under various irradiation conditions is also presented.

  1. Feasibility study of dose reduction in digital breast tomosynthesis using non-local denoising algorithms

    NASA Astrophysics Data System (ADS)

    Vieira, Marcelo A. C.; de Oliveira, Helder C. R.; Nunes, Polyana F.; Borges, Lucas R.; Bakic, Predrag R.; Barufaldi, Bruno; Acciavatti, Raymond J.; Maidment, Andrew D. A.

    2015-03-01

    The main purpose of this work is to study the ability of denoising algorithms to reduce the radiation dose in Digital Breast Tomosynthesis (DBT) examinations. Clinical use of DBT is normally performed in "combo-mode", in which, in addition to DBT projections, a 2D mammogram is taken with the standard radiation dose. As a result, patients have been exposed to radiation doses higher than used in digital mammography. Thus, efforts to reduce the radiation dose in DBT examinations are of great interest. However, a decrease in dose leads to an increased quantum noise level, and related decrease in image quality. This work is aimed at addressing this problem by the use of denoising techniques, which could allow for dose reduction while keeping the image quality acceptable. We have studied two "state of the art" denoising techniques for filtering the quantum noise due to the reduced dose in DBT projections: Non-local Means (NLM) and Block-matching 3D (BM3D). We acquired DBT projections at different dose levels of an anthropomorphic physical breast phantom with inserted simulated microcalcifications. Then, we found the optimal filtering parameters where the denoising algorithms are capable of recovering the quality from the DBT images acquired with the standard radiation dose. Results using objective image quality assessment metrics showed that BM3D algorithm achieved better noise adjustment (mean difference in peak signal to noise ratio < 0.1dB) and less blurring (mean difference in image sharpness ~ 6%) than the NLM for the projections acquired with lower radiation doses.
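
    A minimal sketch of the filtering-and-scoring step, using scikit-image's Non-Local Means implementation and PSNR metric (BM3D is not part of scikit-image and is omitted here). The filter parameters are generic starting points, not the optimal values found in the study.

```python
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma
from skimage.metrics import peak_signal_noise_ratio

def denoise_and_score(low_dose, full_dose):
    """Denoise a low-dose projection with Non-Local Means and report its PSNR
    against the corresponding full-dose projection."""
    sigma = float(np.mean(estimate_sigma(low_dose)))
    denoised = denoise_nl_means(low_dose, h=1.15 * sigma, sigma=sigma,
                                patch_size=5, patch_distance=6, fast_mode=True)
    data_range = full_dose.max() - full_dose.min()
    psnr = peak_signal_noise_ratio(full_dose, denoised, data_range=data_range)
    return denoised, psnr
```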

  2. Evaluation of an electron Monte Carlo dose calculation algorithm for treatment planning.

    PubMed

    Chamberland, Eve; Beaulieu, Luc; Lachance, Bernard

    2015-01-01

    The purpose of this study is to evaluate the accuracy of the electron Monte Carlo (eMC) dose calculation algorithm included in a commercial treatment planning system and compare its performance against an electron pencil beam algorithm. Several tests were performed to explore the system's behavior in simple geometries and in configurations encountered in clinical practice. The first series of tests were executed in a homogeneous water phantom, where experimental measurements and eMC-calculated dose distributions were compared for various combinations of energy and applicator. More specifically, we compared beam profiles and depth-dose curves at different source-to-surface distances (SSDs) and gantry angles, by using dose difference and distance to agreement. Also, we compared output factors, we studied the effects of algorithm input parameters, which are the random number generator seed, as well as the calculation grid size, and we performed a calculation time evaluation. Three different inhomogeneous solid phantoms were built, using high- and low-density material inserts, to clinically simulate relevant heterogeneity conditions: a small air cylinder within a homogeneous phantom, a lung phantom, and a chest wall phantom. We also used an anthropomorphic phantom to perform comparison of eMC calculations to measurements. Finally, we proceeded with an evaluation of the eMC algorithm on a clinical case of nose cancer. In all mentioned cases, measurements, carried out by means of XV-2 films, radiographic films or EBT2 Gafchromic films, were used to compare eMC calculations with dose distributions obtained from an electron pencil beam algorithm. eMC calculations in the water phantom were accurate. Discrepancies for depth-dose curves and beam profiles were under 2.5% and 2 mm. Dose calculations with eMC for the small air cylinder and the lung phantom agreed within 2% and 4%, respectively. eMC calculations for the chest wall phantom and the anthropomorphic phantom also

  3. Evaluation of a new commercial Monte Carlo dose calculation algorithm for electron beams

    SciTech Connect

    Vandervoort, Eric J.; Cygler, Joanna E.; Tchistiakova, Ekaterina; La Russa, Daniel J.

    2014-02-15

    Purpose: In this report the authors present the validation of a Monte Carlo dose calculation algorithm (XiO EMC from Elekta Software) for electron beams. Methods: Calculated and measured dose distributions were compared for homogeneous water phantoms and for a 3D heterogeneous phantom meant to approximate the geometry of a trachea and spine. Comparisons of measurements and calculated data were performed using 2D and 3D gamma index dose comparison metrics. Results: Measured outputs agree with calculated values within estimated uncertainties for standard and extended SSDs for open applicators, and for cutouts, with the exception of the 17 MeV electron beam at extended SSD for cutout sizes smaller than 5 × 5 cm². Good agreement was obtained between calculated and experimental depth dose curves and dose profiles (the minimum percentage of measurements passing a 2%/2 mm 2D gamma index criterion for any applicator or energy was 97%). Dose calculations in a heterogeneous phantom agree with radiochromic film measurements (>98% of pixels pass a three-dimensional 3%/2 mm γ criterion) provided that the steep dose gradient in the depth direction is considered. Conclusions: Clinically acceptable agreement (at the 2%/2 mm level) between the measurements and calculated data in water is obtained for this dose calculation algorithm. Radiochromic film is a useful tool to evaluate the accuracy of electron MC treatment planning systems in heterogeneous media.

  4. Algorithm of pulmonary emphysema extraction using low dose thoracic 3D CT images

    NASA Astrophysics Data System (ADS)

    Saita, S.; Kubo, M.; Kawata, Y.; Niki, N.; Nakano, Y.; Omatsu, H.; Tominaga, K.; Eguchi, K.; Moriyama, N.

    2006-03-01

    Recently, due to aging and smoking, the number of emphysema patients has been increasing. Alveoli destroyed by emphysema cannot be restored, so early detection of emphysema is desired. We describe a quantitative algorithm for extracting emphysematous lesions and quantitatively evaluating their distribution patterns using low-dose thoracic 3-D CT images. The algorithm identified lung anatomies and extracted low attenuation areas (LAA) as emphysematous lesion candidates. By applying the algorithm to 100 thoracic 3-D CT images and to follow-up 3-D CT images, we demonstrate its potential effectiveness in assisting radiologists and physicians to quantitatively evaluate the distribution of emphysematous lesions and their evolution over time.
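
    A minimal sketch of the LAA extraction step: threshold the CT values inside a lung mask and report the LAA fraction. The -950 HU cutoff is a commonly used value in the emphysema literature, assumed here for illustration; the paper's own threshold and the lung segmentation step are not shown.

```python
import numpy as np

def laa_percentage(ct_hu, lung_mask, threshold_hu=-950):
    """Percentage of lung voxels classified as low attenuation area (LAA).

    ct_hu        : 3D array of CT values in Hounsfield units
    lung_mask    : boolean array of the segmented lungs (segmentation not shown)
    threshold_hu : LAA cutoff; -950 HU is a common choice, assumed here
    """
    lung_voxels = ct_hu[lung_mask]
    return 100.0 * np.count_nonzero(lung_voxels <= threshold_hu) / lung_voxels.size
```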

  5. A pharmacogenetics-based warfarin maintenance dosing algorithm from Northern Chinese patients.

    PubMed

    Chen, Jinxing; Shao, Liying; Gong, Ling; Luo, Fang; Wang, Jin'e; Shi, Yi; Tan, Yu; Chen, Qianlong; Zhang, Yu; Hui, Rutai; Wang, Yibo

    2014-01-01

    Inconsistent associations with warfarin dose have been observed for genetic variants other than the VKORC1 haplotype and CYP2C9*3 in Chinese people, and few studies on warfarin dosing algorithms have been performed in a large Chinese Han population living in Northern China. Of 787 consenting patients with heart-valve replacements who were receiving long-term warfarin maintenance therapy, 20 related single-nucleotide polymorphisms (SNPs) were genotyped. Only VKORC1 and CYP2C9 SNPs were observed to be significantly associated with warfarin dose. In the derivation cohort (n = 551), warfarin dose variability was influenced, in decreasing order, by VKORC1 rs7294 (27.3%), CYP2C9*3 (7.0%), body surface area (4.2%), age (2.7%), target INR (1.4%), CYP4F2 rs2108622 (0.7%), amiodarone use (0.6%), diabetes mellitus (0.6%), and digoxin use (0.5%), which together account for 45.1% of the warfarin dose variability. In the validation cohort (n = 236), the actual maintenance dose was significantly correlated with the predicted dose (r = 0.609, P<0.001). Our algorithm could improve the personalized management of warfarin use in Northern Chinese patients. PMID:25126975
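
    A minimal sketch of how such an algorithm can be derived and validated: fit a linear model on a derivation cohort and test the correlation between predicted and actual maintenance dose in a validation cohort. The data frame and column names are assumptions, and the fitted coefficients of the published model are not reproduced.

```python
from sklearn.linear_model import LinearRegression
from scipy.stats import pearsonr

# df is assumed to be a pandas DataFrame with one row per patient and columns
# named after the covariates listed in the abstract (names are assumptions).
COVARIATES = ["vkorc1_rs7294", "cyp2c9_star3", "bsa", "age", "target_inr",
              "cyp4f2_rs2108622", "amiodarone", "diabetes", "digoxin"]

def derive_and_validate(df, seed=0):
    """Split into derivation/validation cohorts, fit, and report R^2 and Pearson r."""
    derivation = df.sample(frac=0.7, random_state=seed)
    validation = df.drop(derivation.index)
    model = LinearRegression().fit(derivation[COVARIATES], derivation["weekly_dose_mg"])
    r2 = model.score(derivation[COVARIATES], derivation["weekly_dose_mg"])
    predicted = model.predict(validation[COVARIATES])
    r, p = pearsonr(validation["weekly_dose_mg"], predicted)
    return model, r2, r, p
```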

  6. SU-E-T-313: The Accuracy of the Acuros XB Advanced Dose Calculation Algorithm for IMRT Dose Distributions in Head and Neck

    SciTech Connect

    Araki, F; Onizuka, R; Ohno, T; Tomiyama, Y; Hioki, K

    2014-06-01

    Purpose: To investigate the accuracy of the Acuros XB version 11 (AXB11) advanced dose calculation algorithm by comparison with Monte Carlo (MC) calculations. The comparisons were performed with dose distributions for a virtual inhomogeneity phantom and intensity-modulated radiotherapy (IMRT) in the head and neck. Methods: Recently, AXB, based on the linear Boltzmann transport equation, has been installed in the Eclipse treatment planning system (Varian Medical Oncology System, USA). The dose calculation accuracy of AXB11 was tested against EGSnrc MC calculations. In addition, AXB version 10 (AXB10) and the Analytical Anisotropic Algorithm (AAA) were also used. First, the accuracy of the inhomogeneity correction for the AXB and AAA algorithms was evaluated by comparison with MC-calculated dose distributions for a virtual inhomogeneity phantom that includes water, bone, air, adipose, muscle, and aluminum. Next, the IMRT dose distributions for the head and neck calculated with the AXB and AAA algorithms were compared with MC by means of dose-volume histograms and three-dimensional gamma analysis for each structure (CTV, OAR, etc.). Results: For dose distributions in the virtual inhomogeneity phantom, AXB was in good agreement with MC, except for the dose in the air region. The dose in the air region was 0.700 MeV for MC, 0.711 MeV for AXB11, and 1.011 MeV for AXB10. Since the AAA algorithm is based on a water dose kernel, the doses in the air, bone, and aluminum regions became considerably higher than those of AXB and MC. The pass rates of the gamma analysis for the IMRT dose distributions in the head and neck were similar to those of MC, and the dose calculation accuracy of AXB11 was almost equivalent to that of the MC dose calculation.

  7. SU-E-T-202: Impact of Monte Carlo Dose Calculation Algorithm On Prostate SBRT Treatments

    SciTech Connect

    Venencia, C; Garrigo, E; Cardenas, J; Castro Pena, P

    2014-06-01

    Purpose: The purpose of this work was to quantify the dosimetric impact of using a Monte Carlo algorithm on SBRT prostate treatments previously calculated with a pencil beam dose calculation algorithm. Methods: A 6 MV photon beam produced by a Novalis TX (BrainLAB-Varian) linear accelerator equipped with an HDMLC was used. Treatment plans were created using 9 fields with iPlan v4.5 (BrainLAB) and the dynamic IMRT modality. The institutional SBRT protocol uses a total dose to the prostate of 40 Gy in 5 fractions, every other day. Dose calculation was done by pencil beam (2 mm dose resolution) with heterogeneity correction and dose-volume constraints (UCLA): PTV D95%=40Gy and D98%>39.2Gy, Rectum V20Gy<50%, V32Gy<20%, V36Gy<10% and V40Gy<5%, Bladder V20Gy<40% and V40Gy<10%, femoral heads V16Gy<5%, penile bulb V25Gy<3cc, urethra and overlap region between PTV and PRV Rectum Dmax<42Gy. 10 SBRT treatment plans were selected and recalculated using Monte Carlo with 2 mm spatial resolution and a mean variance of 2%. DVH comparisons between plans were done. Results: The average differences in the PTV dose constraints were within 2%. However, 3 plans had differences higher than 3%, did not meet the D98% criterion (>39.2Gy) and should have been renormalized. Dose-volume constraint differences for rectum, bladder, femoral heads and penile bulb were less than 2% and within tolerances. The urethra region and the overlap between the PTV and the PRV rectum showed a dose increase in all plans. The average difference for the urethra region was 2.1% with a maximum of 7.8%, and for the overlap region 2.5% with a maximum of 8.7%. Conclusion: Monte Carlo dose calculation on dynamic IMRT treatments can affect plan normalization. The dose increase in the critical urethra region and in the region where the PTV overlaps the PRV rectum could have clinical consequences, which need to be studied. The use of the Monte Carlo dose calculation algorithm is limited because the inverse planning dose optimization uses only the pencil beam algorithm.

  8. Comparison of dose calculation algorithms in slab phantoms with cortical bone equivalent heterogeneities

    SciTech Connect

    Carrasco, P.; Jornet, N.; Duch, M. A.; Panettieri, V.; Weber, L.; Eudaldo, T.; Ginjaume, M.; Ribas, M.

    2007-08-15

    To evaluate the dose values predicted by several calculation algorithms in two treatment planning systems, Monte Carlo (MC) simulations and measurements by means of various detectors were performed in heterogeneous layer phantoms with water- and bone-equivalent materials. Percentage depth doses (PDDs) were measured with thermoluminescent dosimeters (TLDs), metal-oxide semiconductor field-effect transistors (MOSFETs), plane parallel and cylindrical ionization chambers, and beam profiles with films. The MC code used for the simulations was the PENELOPE code. Three different field sizes (10×10, 5×5, and 2×2 cm²) were studied in two phantom configurations and a bone equivalent material. These two phantom configurations contained heterogeneities of 5 and 2 cm of bone, respectively. We analyzed the performance of four correction-based algorithms and one based on convolution superposition. The correction-based algorithms were the Batho, the Modified Batho, the Equivalent TAR implemented in the Cadplan (Varian) treatment planning system (TPS), and the Helax-TMS Pencil Beam from the Helax-TMS (Nucletron) TPS. The convolution-superposition algorithm was the Collapsed Cone implemented in the Helax-TMS. All the correction-based calculation algorithms underestimated the dose inside the bone-equivalent material for 18 MV compared to MC simulations. The maximum underestimation, in terms of root-mean-square (RMS), was about 15% for the Helax-TMS Pencil Beam (Helax-TMS PB) for a 2×2 cm² field inside the bone-equivalent material. In contrast, the Collapsed Cone algorithm yielded values around 3%. A more complex behavior was found for 6 MV where the Collapsed Cone performed less well, overestimating the dose inside the heterogeneity in 3%-5%. The rebuildup in the interface bone-water and the penumbra shrinking in high-density media were not predicted by any of the calculation algorithms except the Collapsed Cone, and only the MC simulations matched the experimental values

  9. Comparison of dose calculation algorithms in slab phantoms with cortical bone equivalent heterogeneities.

    PubMed

    Carrasco, P; Jornet, N; Duch, M A; Panettieri, V; Weber, L; Eudaldo, T; Ginjaume, M; Ribas, M

    2007-08-01

    To evaluate the dose values predicted by several calculation algorithms in two treatment planning systems, Monte Carlo (MC) simulations and measurements by means of various detectors were performed in heterogeneous layer phantoms with water- and bone-equivalent materials. Percentage depth doses (PDDs) were measured with thermoluminescent dosimeters (TLDs), metal-oxide semiconductor field-effect transistors (MOSFETs), plane parallel and cylindrical ionization chambers, and beam profiles with films. The MC code used for the simulations was the PENELOPE code. Three different field sizes (10 x 10, 5 x 5, and 2 x 2 cm2) were studied in two phantom configurations and a bone equivalent material. These two phantom configurations contained heterogeneities of 5 and 2 cm of bone, respectively. We analyzed the performance of four correction-based algorithms and one based on convolution superposition. The correction-based algorithms were the Batho, the Modified Batho, the Equivalent TAR implemented in the Cadplan (Varian) treatment planning system (TPS), and the Helax-TMS Pencil Beam from the Helax-TMS (Nucletron) TPS. The convolution-superposition algorithm was the Collapsed Cone implemented in the Helax-TMS. All the correction-based calculation algorithms underestimated the dose inside the bone-equivalent material for 18 MV compared to MC simulations. The maximum underestimation, in terms of root-mean-square (RMS), was about 15% for the Helax-TMS Pencil Beam (Helax-TMS PB) for a 2 x 2 cm2 field inside the bone-equivalent material. In contrast, the Collapsed Cone algorithm yielded values around 3%. A more complex behavior was found for 6 MV where the Collapsed Cone performed less well, overestimating the dose inside the heterogeneity in 3%-5%. The rebuildup in the interface bone-water and the penumbra shrinking in high-density media were not predicted by any of the calculation algorithms except the Collapsed Cone, and only the MC simulations matched the experimental values

  10. Pharmacokinetically guided algorithm of 5-fluorouracil dosing, a reliable strategy of precision chemotherapy for solid tumors: a meta-analysis

    PubMed Central

    Fang, Luo; Xin, Wenxiu; Ding, Haiying; Zhang, Yiwen; Zhong, Like; Luo, Hong; Li, Jingjing; Yang, Yunshan; Huang, Ping

    2016-01-01

    Precision medicine characterizes a new era of cancer care and provides each patient with the right drug at the right dose and time. However, the practice of precision dosing is hampered by a lack of smart dosing algorithms. A pharmacokinetically guided (PKG) dosing algorithm is considered to be the leading strategy for precision chemotherapy, although the effects of PKG dosing are not completely confirmed. Hence, we conducted a meta-analysis to evaluate the effects of the PKG algorithm of 5-fluorouracil (5-FU) dosing on patients with solid tumors. A comprehensive retrieval was performed to identify all of the prospective controlled studies that compared the body surface area (BSA)-based algorithm with the PKG algorithm of 5-FU in patients with solid tumors. Overall, four studies with 504 patients were included. The PKG algorithm significantly improved the objective response rate of 5-FU-based chemotherapy compared with the BSA-based algorithm. Furthermore, PKG dosing markedly decreased the risk of total grade 3/4 adverse drug reactions, especially those related to hematological toxicity. Overall, the PKG algorithm may serve as a reliable strategy for individualized dosing of 5-FU. PMID:27229175
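
    Pharmacokinetically guided dosing typically adjusts the next cycle's dose in proportion to the ratio of a target exposure to the measured exposure. The sketch below illustrates that generic idea; the AUC window and the proportional rule are placeholders for illustration, not the protocols used in the pooled trials.

```python
def adjust_5fu_dose(current_dose_mg_per_m2, measured_auc_mg_h_per_l,
                    target_low=20.0, target_high=30.0):
    """Toy AUC-guided 5-FU dose adjustment: if the measured AUC falls outside a
    target window, scale the dose toward the middle of the window.
    The window bounds and the proportional rule are illustrative placeholders."""
    if not target_low <= measured_auc_mg_h_per_l <= target_high:
        target_mid = 0.5 * (target_low + target_high)
        return current_dose_mg_per_m2 * target_mid / measured_auc_mg_h_per_l
    return current_dose_mg_per_m2

# Example: a measured AUC of 12 mg*h/L on 2400 mg/m2 suggests scaling the dose up
# adjust_5fu_dose(2400, 12.0)  ->  5000.0
```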

  11. Pharmacokinetically guided algorithm of 5-fluorouracil dosing, a reliable strategy of precision chemotherapy for solid tumors: a meta-analysis.

    PubMed

    Fang, Luo; Xin, Wenxiu; Ding, Haiying; Zhang, Yiwen; Zhong, Like; Luo, Hong; Li, Jingjing; Yang, Yunshan; Huang, Ping

    2016-01-01

    Precision medicine characterizes a new era of cancer care and provides each patient with the right drug at the right dose and time. However, the practice of precision dosing is hampered by a lack of smart dosing algorithms. A pharmacokinetically guided (PKG) dosing algorithm is considered to be the leading strategy for precision chemotherapy, although the effects of PKG dosing are not completely confirmed. Hence, we conducted a meta-analysis to evaluate the effects of the PKG algorithm of 5-fluorouracil (5-FU) dosing on patients with solid tumors. A comprehensive retrieval was performed to identify all of the prospective controlled studies that compared the body surface area (BSA)-based algorithm with the PKG algorithm of 5-FU in patients with solid tumors. Overall, four studies with 504 patients were included. The PKG algorithm significantly improved the objective response rate of 5-FU-based chemotherapy compared with the BSA-based algorithm. Furthermore, PKG dosing markedly decreased the risk of total grade 3/4 adverse drug reactions, especially those related to hematological toxicity. Overall, the PKG algorithm may serve as a reliable strategy for individualized dosing of 5-FU. PMID:27229175

  12. The LANL model 8823 whole-body TLD and associated dose algorithm

    SciTech Connect

    Hoffman, J.M.; Mallett, M.W.

    1999-11-01

    The Los Alamos National Laboratory Model 8823 whole-body TLD has been designed to perform accurate dose estimates for beta, photon, and neutron radiations that are encountered in pure calibration, mixed calibration, and typical field radiation conditions. The radiation energies and field types that the Model 8823 dosimeter is capable of measuring are described. The Model 8823 dosimeter has been accredited for all performance testing categories in the Department of Energy Laboratory Accreditation Program for external dosimetry systems. The philosophy used in the design of the Model 8823 dosimeter and the associated dose algorithm is to isolate the responses due to beta, photon, and neutron radiations; obtain radiation quality information; and make functional adjustments to the elemental readings to estimate the dose equivalent at 7, 300, and 1,000 mg cm⁻², representing the required reporting quantities for shallow, lens-of-the-eye, and deep dose, respectively.

  13. Dosimetric impact of Acuros XB deterministic radiation transport algorithm for heterogeneous dose calculation in lung cancer

    SciTech Connect

    Han, Tao; Followill, David; Repchak, Roman; Molineu, Andrea; Howell, Rebecca; Salehpour, Mohammad; Mikell, Justin; Mourtada, Firas

    2013-05-15

    Purpose: The novel deterministic radiation transport algorithm, Acuros XB (AXB), has shown great potential for accurate heterogeneous dose calculation. However, the clinical impact between AXB and other currently used algorithms still needs to be elucidated for translation between these algorithms. The purpose of this study was to investigate the impact of AXB for heterogeneous dose calculation in lung cancer for intensity-modulated radiation therapy (IMRT) and volumetric-modulated arc therapy (VMAT). Methods: The thorax phantom from the Radiological Physics Center (RPC) was used for this study. IMRT and VMAT plans were created for the phantom in the Eclipse 11.0 treatment planning system. Each plan was delivered to the phantom three times using a Varian Clinac iX linear accelerator to ensure reproducibility. Thermoluminescent dosimeters (TLDs) and Gafchromic EBT2 film were placed inside the phantom to measure delivered doses. The measurements were compared with dose calculations from AXB 11.0.21 and the anisotropic analytical algorithm (AAA) 11.0.21. Two dose reporting modes of AXB, dose-to-medium in medium (Dm,m) and dose-to-water in medium (Dw,m), were studied. Point doses, dose profiles, and gamma analysis were used to quantify the agreement between measurements and calculations from both AXB and AAA. The computation times for AAA and AXB were also evaluated. Results: For the RPC lung phantom, AAA and AXB dose predictions were found to be in good agreement with TLD and film measurements for both IMRT and VMAT plans. TLD dose predictions were within 0.4%-4.4% of AXB doses (both Dm,m and Dw,m) and within 2.5%-6.4% of AAA doses, respectively. For the film comparisons, the gamma indexes (±3%/3 mm criteria) were 94%, 97%, and 98% for AAA, AXB Dm,m, and AXB Dw,m, respectively. The differences between AXB and AAA in dose-volume histogram mean doses were within 2% in the planning target volume, lung, heart, and within 5% in the spinal cord

  14. A comparison between anisotropic analytical and multigrid superposition dose calculation algorithms in radiotherapy treatment planning.

    PubMed

    Wu, Vincent W C; Tse, Teddy K H; Ho, Cola L M; Yeung, Eric C Y

    2013-01-01

    Monte Carlo (MC) simulation is currently the most accurate dose calculation algorithm in radiotherapy planning but requires relatively long processing time. Faster model-based algorithms such as the anisotropic analytical algorithm (AAA) of the Eclipse treatment planning system and multigrid superposition (MGS) of the XiO treatment planning system are 2 commonly used algorithms. This study compared AAA and MGS against MC, as the gold standard, on brain, nasopharynx, lung, and prostate cancer patients. Computed tomography of 6 patients of each cancer type was used. The same hypothetical treatment plan using the same machine and treatment prescription was computed for each case by each planning system using its respective dose calculation algorithm. The doses at reference points including (1) soft tissues only, (2) bones only, (3) air cavities only, (4) soft tissue-bone boundary (Soft/Bone), (5) soft tissue-air boundary (Soft/Air), and (6) bone-air boundary (Bone/Air) were measured and compared using the mean absolute percentage error (MAPE), which was a function of the percentage dose deviations from MC. In addition, the computation time of each treatment plan was recorded and compared. The MAPEs of MGS were significantly lower than those of AAA in all types of cancers (p<0.001). With regard to body density combinations, the MAPE of AAA ranged from 1.8% (soft tissue) to 4.9% (Bone/Air), whereas that of MGS ranged from 1.6% (air cavities) to 2.9% (Soft/Bone). The MAPEs of MGS (2.6% ± 2.1%) were significantly lower than those of AAA (3.7% ± 2.5%) in all tissue density combinations (p<0.001). The mean computation time of AAA for all treatment plans was significantly lower than that of MGS (p<0.001). Both AAA and MGS algorithms demonstrated dose deviations of less than 4.0% in most clinical cases and their performance was better in homogeneous tissues than at tissue boundaries. In general, MGS demonstrated relatively smaller dose deviations than AAA but required longer computation time
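
    The MAPE metric used above is straightforward to compute. A minimal sketch, assuming the algorithm's point doses and the MC reference doses are sampled at the same locations:

```python
import numpy as np

def mape(algorithm_doses, mc_doses):
    """Mean absolute percentage error of an algorithm's point doses relative to
    Monte Carlo reference doses at the same points."""
    algorithm_doses = np.asarray(algorithm_doses, dtype=float)
    mc_doses = np.asarray(mc_doses, dtype=float)
    return 100.0 * np.mean(np.abs(algorithm_doses - mc_doses) / mc_doses)

# mape([2.01, 1.95, 1.80], [2.00, 2.00, 2.00])  ->  ~4.3%
```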

  15. A comparison between anisotropic analytical and multigrid superposition dose calculation algorithms in radiotherapy treatment planning

    SciTech Connect

    Wu, Vincent W.C.; Tse, Teddy K.H.; Ho, Cola L.M.; Yeung, Eric C.Y.

    2013-07-01

    Monte Carlo (MC) simulation is currently the most accurate dose calculation algorithm in radiotherapy planning but requires relatively long processing time. Faster model-based algorithms such as the anisotropic analytical algorithm (AAA) of the Eclipse treatment planning system and multigrid superposition (MGS) of the XiO treatment planning system are 2 commonly used algorithms. This study compared AAA and MGS against MC, as the gold standard, on brain, nasopharynx, lung, and prostate cancer patients. Computed tomography of 6 patients of each cancer type was used. The same hypothetical treatment plan using the same machine and treatment prescription was computed for each case by each planning system using its respective dose calculation algorithm. The doses at reference points including (1) soft tissues only, (2) bones only, (3) air cavities only, (4) soft tissue-bone boundary (Soft/Bone), (5) soft tissue-air boundary (Soft/Air), and (6) bone-air boundary (Bone/Air) were measured and compared using the mean absolute percentage error (MAPE), which was a function of the percentage dose deviations from MC. In addition, the computation time of each treatment plan was recorded and compared. The MAPEs of MGS were significantly lower than those of AAA in all types of cancers (p<0.001). With regard to body density combinations, the MAPE of AAA ranged from 1.8% (soft tissue) to 4.9% (Bone/Air), whereas that of MGS ranged from 1.6% (air cavities) to 2.9% (Soft/Bone). The MAPEs of MGS (2.6% ± 2.1%) were significantly lower than those of AAA (3.7% ± 2.5%) in all tissue density combinations (p<0.001). The mean computation time of AAA for all treatment plans was significantly lower than that of MGS (p<0.001). Both AAA and MGS algorithms demonstrated dose deviations of less than 4.0% in most clinical cases and their performance was better in homogeneous tissues than at tissue boundaries. In general, MGS demonstrated relatively smaller dose deviations than AAA but required longer computation time.

  16. Site-specific range uncertainties caused by dose calculation algorithms for proton therapy.

    PubMed

    Schuemann, J; Dowdell, S; Grassberger, C; Min, C H; Paganetti, H

    2014-08-01

    The purpose of this study was to assess the possibility of introducing site-specific range margins to replace current generic margins in proton therapy. Further, the goal was to study the potential of reducing margins with current analytical dose calculation methods. For this purpose we investigate the impact of complex patient geometries on the capability of analytical dose calculation algorithms to accurately predict the range of proton fields. Dose distributions predicted by an analytical pencil-beam algorithm were compared with those obtained using Monte Carlo (MC) simulations (TOPAS). A total of 508 passively scattered treatment fields were analyzed for seven disease sites (liver, prostate, breast, medulloblastoma-spine, medulloblastoma-whole brain, lung and head and neck). Voxel-by-voxel comparisons were performed on two-dimensional distal dose surfaces calculated by pencil-beam and MC algorithms to obtain the average range differences and root mean square deviation for each field for the distal position of the 90% dose level (R90) and the 50% dose level (R50). The average dose degradation of the distal falloff region, defined as the distance between the distal position of the 80% and 20% dose levels (R80-R20), was also analyzed. All ranges were calculated in water-equivalent distances. Considering total range uncertainties and uncertainties from dose calculation alone, we were able to deduce site-specific estimations. For liver, prostate and whole brain fields our results demonstrate that a reduction of currently used uncertainty margins is feasible even without introducing MC dose calculations. We recommend range margins of 2.8% + 1.2 mm for liver and prostate treatments and 3.1% + 1.2 mm for whole brain treatments, respectively. On the other hand, current margins seem to be insufficient for some breast, lung and head and neck patients, at least if used generically. If no case specific adjustments are applied, a generic margin of 6.3% + 1.2
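
    The range metrics analyzed here (R90, R50, and the R80-R20 distal falloff) can be extracted from a depth-dose curve by interpolating on its distal edge. A minimal sketch, assuming a 1D depth-dose curve sampled in water-equivalent millimetres:

```python
import numpy as np

def distal_depth(depth_mm, dose, level):
    """Depth (water-equivalent mm) where the dose falls to `level` (fraction of the
    maximum) on the distal side of the curve, by linear interpolation."""
    dose = np.asarray(dose, dtype=float) / np.max(dose)
    depth_mm = np.asarray(depth_mm, dtype=float)
    i_max = int(np.argmax(dose))
    distal_dose = dose[i_max:]
    distal_depths = depth_mm[i_max:]
    # dose decreases with depth beyond the peak, so reverse for np.interp
    return float(np.interp(level, distal_dose[::-1], distal_depths[::-1]))

def range_metrics(depth_mm, dose):
    """Return R90, R50 and the R80-R20 distal falloff width."""
    r90 = distal_depth(depth_mm, dose, 0.90)
    r50 = distal_depth(depth_mm, dose, 0.50)
    falloff_80_20 = distal_depth(depth_mm, dose, 0.20) - distal_depth(depth_mm, dose, 0.80)
    return r90, r50, falloff_80_20
```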

  17. Site-specific range uncertainties caused by dose calculation algorithms for proton therapy

    NASA Astrophysics Data System (ADS)

    Schuemann, J.; Dowdell, S.; Grassberger, C.; Min, C. H.; Paganetti, H.

    2014-08-01

    The purpose of this study was to assess the possibility of introducing site-specific range margins to replace current generic margins in proton therapy. Further, the goal was to study the potential of reducing margins with current analytical dose calculation methods. For this purpose we investigate the impact of complex patient geometries on the capability of analytical dose calculation algorithms to accurately predict the range of proton fields. Dose distributions predicted by an analytical pencil-beam algorithm were compared with those obtained using Monte Carlo (MC) simulations (TOPAS). A total of 508 passively scattered treatment fields were analyzed for seven disease sites (liver, prostate, breast, medulloblastoma-spine, medulloblastoma-whole brain, lung and head and neck). Voxel-by-voxel comparisons were performed on two-dimensional distal dose surfaces calculated by pencil-beam and MC algorithms to obtain the average range differences and root mean square deviation for each field for the distal position of the 90% dose level (R90) and the 50% dose level (R50). The average dose degradation of the distal falloff region, defined as the distance between the distal position of the 80% and 20% dose levels (R80-R20), was also analyzed. All ranges were calculated in water-equivalent distances. Considering total range uncertainties and uncertainties from dose calculation alone, we were able to deduce site-specific estimations. For liver, prostate and whole brain fields our results demonstrate that a reduction of currently used uncertainty margins is feasible even without introducing MC dose calculations. We recommend range margins of 2.8% + 1.2 mm for liver and prostate treatments and 3.1% + 1.2 mm for whole brain treatments, respectively. On the other hand, current margins seem to be insufficient for some breast, lung and head and neck patients, at least if used generically. If no case specific adjustments are applied, a generic margin of 6.3% + 1.2 mm would be

  18. Development of a deformable dosimetric phantom to verify dose accumulation algorithms for adaptive radiotherapy.

    PubMed

    Zhong, Hualiang; Adams, Jeffrey; Glide-Hurst, Carri; Zhang, Hualin; Li, Haisen; Chetty, Indrin J

    2016-01-01

    Adaptive radiotherapy may improve treatment outcomes for lung cancer patients. Because of the lack of an effective tool for quality assurance, this therapeutic modality is not yet accepted in the clinic. The purpose of this study is to develop a deformable physical phantom for validation of dose accumulation algorithms in regions with heterogeneous mass. A three-dimensional (3D) deformable phantom was developed containing a tissue-equivalent tumor and heterogeneous sponge inserts. Thermoluminescent dosimeters (TLDs) were placed at multiple locations in the phantom each time before dose measurement. Doses were measured with the phantom in both the static and deformed cases. The deformation of the phantom was actuated by a motor-driven piston. 4D computed tomography images were acquired to calculate 3D doses at each phase using Pinnacle and EGSnrc/DOSXYZnrc. These images were registered using two registration software packages: VelocityAI and Elastix. With the resultant displacement vector fields (DVFs), the calculated 3D doses were accumulated using a mass- and energy-congruent mapping method and compared to those measured by the TLDs at four typical locations. In the static case, TLD measurements agreed with all the algorithms by 1.8% at the center of the tumor volume and by 4.0% in the penumbra. In the deformable case, the phantom's deformation was reproduced within 1.1 mm. For the 3D dose calculated by Pinnacle, the total dose accumulated with the Elastix DVF agreed well with the TLD measurements, with differences <2.5% at the four measured locations. When the VelocityAI DVF was used, the difference increased up to 11.8%. For the 3D dose calculated by EGSnrc/DOSXYZnrc, the total doses accumulated with the two DVFs were within 5.7% of the TLD measurements, which is slightly over the 5% level for clinical acceptance. The detector-embedded deformable phantom allows radiation dose to be measured in a dynamic environment, similar to deforming lung tissues, supporting
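
    Dose accumulation with a DVF can be sketched as a simple pull-back interpolation of each phase dose onto a reference grid. The sketch below uses trilinear interpolation and deliberately ignores the mass- and energy-congruent weighting used in the study; the DVF is assumed to be given in voxel units on the reference grid.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_dose(phase_dose, dvf_voxels):
    """Pull a per-phase 3D dose onto the reference grid using a displacement
    vector field of shape (3, nz, ny, nx) given in voxel units. Trilinear
    interpolation only; no mass/energy conservation (unlike the paper's method)."""
    nz, ny, nx = phase_dose.shape
    zz, yy, xx = np.meshgrid(np.arange(nz), np.arange(ny), np.arange(nx), indexing="ij")
    coords = np.stack([zz + dvf_voxels[0], yy + dvf_voxels[1], xx + dvf_voxels[2]])
    return map_coordinates(phase_dose, coords, order=1, mode="nearest")

def accumulate(phase_doses, dvfs):
    """Sum the warped per-phase doses (e.g. one per 4D-CT phase) on the reference grid."""
    return sum(warp_dose(d, v) for d, v in zip(phase_doses, dvfs))
```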

  19. Development of a deformable dosimetric phantom to verify dose accumulation algorithms for adaptive radiotherapy

    PubMed Central

    Zhong, Hualiang; Adams, Jeffrey; Glide-Hurst, Carri; Zhang, Hualin; Li, Haisen; Chetty, Indrin J.

    2016-01-01

    Adaptive radiotherapy may improve treatment outcomes for lung cancer patients. Because of the lack of an effective tool for quality assurance, this therapeutic modality is not yet accepted in the clinic. The purpose of this study is to develop a deformable physical phantom for validation of dose accumulation algorithms in regions with heterogeneous mass. A three-dimensional (3D) deformable phantom was developed containing a tissue-equivalent tumor and heterogeneous sponge inserts. Thermoluminescent dosimeters (TLDs) were placed at multiple locations in the phantom each time before dose measurement. Doses were measured with the phantom in both the static and deformed cases. The deformation of the phantom was actuated by a motor-driven piston. 4D computed tomography images were acquired to calculate 3D doses at each phase using Pinnacle and EGSnrc/DOSXYZnrc. These images were registered using two registration software packages: VelocityAI and Elastix. With the resultant displacement vector fields (DVFs), the calculated 3D doses were accumulated using a mass- and energy-congruent mapping method and compared to those measured by the TLDs at four typical locations. In the static case, TLD measurements agreed with all the algorithms by 1.8% at the center of the tumor volume and by 4.0% in the penumbra. In the deformable case, the phantom's deformation was reproduced within 1.1 mm. For the 3D dose calculated by Pinnacle, the total dose accumulated with the Elastix DVF agreed well with the TLD measurements, with differences <2.5% at the four measured locations. When the VelocityAI DVF was used, the difference increased up to 11.8%. For the 3D dose calculated by EGSnrc/DOSXYZnrc, the total doses accumulated with the two DVFs were within 5.7% of the TLD measurements, which is slightly over the 5% level for clinical acceptance. The detector-embedded deformable phantom allows radiation dose to be measured in a dynamic environment, similar to deforming lung tissues, supporting

  20. [Personal computer interactive algorithm for estimating radiologic contamination and doses after a nuclear accident in Europe].

    PubMed

    Tabet, E

    2001-01-01

    The algorithm RANA (radiological assessment of nuclear accidents) is a tool which can be used to estimate the spatial and temporal structure of the radiological consequences of a radioactive release following a nuclear accident in Europe. The algorithm, formulated in the language of Mathematica, can be run on a personal computer. It uses simplified physical assumptions for the diffusion of the cloud and the transfer of the contamination to the food chain. The user obtains the needed information by means of interactive windows that allow a fast evaluation of dose and contamination profiles. Calculations are performed either starting from the source terms or from knowledge of experimental contamination data. Radiological consequences, such as individual or collective doses from several exposure paths, are parametrized in terms of the atmospheric diffusion categories. PMID:11758278

  1. Site-specific range uncertainties caused by dose calculation algorithms for proton therapy

    PubMed Central

    Schuemann, J.; Dowdell, S.; Grassberger, C.; Min, C. H.; Paganetti, H.

    2014-01-01

    The purpose of this study was to investigate the impact of complex patient geometries on the capability of analytical dose calculation algorithms to accurately predict the range of proton fields. Dose distributions predicted by an analytical pencil-beam algorithm were compared with those obtained using Monte Carlo simulations (TOPAS). A total of 508 passively scattered treatment fields were analyzed for 7 disease sites (liver, prostate, breast, medulloblastoma-spine, medulloblastoma-whole brain, lung and head & neck). Voxel-by-voxel comparisons were performed on two-dimensional distal dose surfaces calculated by pencil-beam and Monte Carlo algorithms to obtain the average range differences (ARD) and root mean square deviation (RMSD) for each field for the distal position of the 90% dose level (R90) and the 50% dose level (R50). The average dose degradation (ADD) of the distal falloff region, defined as the distance between the distal position of the 80% and 20% dose levels (R80-R20), was also analyzed. All ranges were calculated in water-equivalent distances. Considering total range uncertainties and uncertainties from dose calculation alone, we were able to deduce site-specific estimations. For liver, prostate and whole brain fields our results demonstrate that a reduction of currently used uncertainty margins is feasible even without introducing Monte Carlo dose calculations. We recommend range margins of 2.8% + 1.2 mm for liver and prostate treatments and 3.1% + 1.2 mm for whole brain treatments, respectively. On the other hand, current margins seem to be insufficient for some breast, lung and head & neck patients, at least if used generically. If no case specific adjustments are applied, a generic margin of 6.3% + 1.2 mm would be needed for breast, lung and head & neck treatments. We conclude that currently used generic range uncertainty margins in proton therapy should be redefined site specific and that complex geometries may require a field specific

  2. A nonvoxel-based dose convolution/superposition algorithm optimized for scalable GPU architectures

    SciTech Connect

    Neylon, J.; Sheng, K.; Yu, V.; Low, D. A.; Kupelian, P.; Santhanam, A.; Chen, Q.

    2014-10-15

    Purpose: Real-time adaptive planning and treatment has been infeasible due in part to its high computational complexity. There have been many recent efforts to utilize graphics processing units (GPUs) to accelerate the computational performance and dose accuracy in radiation therapy. Data structure and memory access patterns are the key GPU factors that determine the computational performance and accuracy. In this paper, the authors present a nonvoxel-based (NVB) approach to maximize computational and memory access efficiency and throughput on the GPU. Methods: The proposed algorithm employs a ray-tracing mechanism to restructure the 3D data sets computed from the CT anatomy into a nonvoxel-based framework. In a process that takes only a few milliseconds of computing time, the algorithm restructured the data sets by ray-tracing through precalculated CT volumes to realign the coordinate system along the convolution direction, as defined by zenithal and azimuthal angles. During the ray-tracing step, the data were resampled according to radial sampling and parallel ray-spacing parameters making the algorithm independent of the original CT resolution. The nonvoxel-based algorithm presented in this paper also demonstrated a trade-off in computational performance and dose accuracy for different coordinate system configurations. In order to find the best balance between the computed speedup and the accuracy, the authors employed an exhaustive parameter search on all sampling parameters that defined the coordinate system configuration: zenithal, azimuthal, and radial sampling of the convolution algorithm, as well as the parallel ray spacing during ray tracing. The angular sampling parameters were varied between 4 and 48 discrete angles, while both radial sampling and parallel ray spacing were varied from 0.5 to 10 mm. The gamma distribution analysis method (γ) was used to compare the dose distributions using 2% and 2 mm dose difference and distance-to-agreement criteria

  3. Monte Carlo photon beam modeling and commissioning for radiotherapy dose calculation algorithm.

    PubMed

    Toutaoui, A; Ait chikh, S; Khelassi-Toutaoui, N; Hattali, B

    2014-11-01

    The aim of the present work was a Monte Carlo verification of the Multi-grid superposition (MGS) dose calculation algorithm implemented in the CMS XiO (Elekta) treatment planning system and used to calculate the dose distribution produced by photon beams generated by the linear accelerator (linac) Siemens Primus. The BEAMnrc/DOSXYZnrc (EGSnrc package) Monte Carlo model of the linac head was used as a benchmark. In the first part of the work, the BEAMnrc was used for the commissioning of a 6 MV photon beam and to optimize the linac description to fit the experimental data. In the second part, the MGS dose distributions were compared with DOSXYZnrc using relative dose error comparison and γ-index analysis (2%/2 mm, 3%/3 mm), in different dosimetric test cases. Results show good agreement between simulated and calculated dose in homogeneous media for square and rectangular symmetric fields. The γ-index analysis confirmed that for most cases the MGS model and EGSnrc doses are within 3% or 3 mm. PMID:24947967

  4. MO-E-17A-05: Individualized Patient Dosimetry in CT Using the Patient Dose (PATDOSE) Algorithm

    SciTech Connect

    Hernandez, A; Boone, J

    2014-06-15

    Purpose: Radiation dose to the patient undergoing a CT examination has been the focus of many recent studies. While CTDIvol and SSDE-based methods are important tools for patient dose management, the CT image data provide important information with respect to CT dose and its distribution. Coupled with the known geometry and output factors (kV, mAs, pitch, etc.) of the CT scanner, the CT dataset can be used directly for computing absorbed dose. Methods: The HU numbers in a patient's CT data set can be converted to linear attenuation coefficients (LACs) with some assumptions. With this (PAT-DOSE) method, which is not Monte Carlo-based, the primary and scatter dose are computed separately. The primary dose is computed directly from the geometry of the scanner, the x-ray spectrum, and the known patient LACs. Once the primary dose has been computed for all voxels in the patient, the scatter dose algorithm redistributes a fraction of the absorbed primary dose (based on the HU number of each source voxel); the method invokes both tissue attenuation and absorption and solid-angle geometry. The scatter dose algorithm can be run N times to include Nth-scatter redistribution. PAT-DOSE was deployed using simple PMMA phantoms to validate its performance against Monte Carlo-derived dose distributions. Results: Comparison between PAT-DOSE and MCNPX primary dose distributions showed excellent agreement for several scan lengths. The 1st-scatter dose distributions showed relatively higher-amplitude, long-range scatter tails for the PAT-DOSE algorithm than for the MCNPX simulations. Conclusion: The PAT-DOSE algorithm provides a fast, deterministic assessment of the 3-D dose distribution in CT, making use of the scanner geometry and the patient image data set. The preliminary implementation of the algorithm produces accurate primary dose distributions; however, achieving scatter distribution agreement is more challenging. Addressing the polyenergetic x-ray spectrum and spatially dependent
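
    The first step of such a method — converting HU numbers to linear attenuation coefficients and attenuating the primary beam along a ray — can be sketched as below. The water attenuation coefficient is an energy-dependent placeholder, and the scatter redistribution step is not shown.

```python
import numpy as np

MU_WATER_PER_CM = 0.2  # placeholder effective value; in practice this is energy dependent

def hu_to_mu(hu):
    """Convert CT numbers to linear attenuation coefficients using the HU definition
    HU = 1000 * (mu - mu_water) / mu_water."""
    return MU_WATER_PER_CM * (1.0 + np.asarray(hu, dtype=float) / 1000.0)

def primary_transmission(hu_along_ray, step_cm):
    """Fraction of primary photons surviving along a ray through the patient,
    exp(-sum(mu_i * dl)); the primary dose to a voxel scales with this factor."""
    mu = hu_to_mu(hu_along_ray)
    return float(np.exp(-np.sum(mu * step_cm)))
```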

  5. Dosimetric validation of the Acuros XB Advanced Dose Calculation algorithm: fundamental characterization in water

    NASA Astrophysics Data System (ADS)

    Fogliata, Antonella; Nicolini, Giorgia; Clivio, Alessandro; Vanetti, Eugenio; Mancosu, Pietro; Cozzi, Luca

    2011-03-01

    A new algorithm, Acuros® XB Advanced Dose Calculation, has been introduced by Varian Medical Systems in the Eclipse planning system for photon dose calculation in external radiotherapy. Acuros XB is based on the solution of the linear Boltzmann transport equation (LBTE). The LBTE describes the macroscopic behaviour of radiation particles as they travel through and interact with matter. The implementation of Acuros XB in Eclipse has not been assessed; therefore, it is necessary to perform these pre-clinical validation tests to determine its accuracy. This paper summarizes the results of comparisons of Acuros XB calculations against measurements and calculations performed with a previously validated dose calculation algorithm, the Anisotropic Analytical Algorithm (AAA). The tasks addressed in this paper are limited to the fundamental characterization of Acuros XB in water for simple geometries. Validation was carried out for four different beams: 6 and 15 MV beams from a Varian Clinac 2100 iX, and 6 and 10 MV 'flattening filter free' (FFF) beams from a TrueBeam linear accelerator. The TrueBeam FFF are new beams recently introduced in clinical practice on general purpose linear accelerators and have not been previously reported on. Results indicate that Acuros XB accurately reproduces measured and calculated (with AAA) data and only small deviations were observed for all the investigated quantities. In general, the overall degree of accuracy for Acuros XB in simple geometries can be stated to be within 1% for open beams and within 2% for mechanical wedges. The basic validation of the Acuros XB algorithm was therefore considered satisfactory for both conventional photon beams as well as for FFF beams of new generation linacs such as the Varian TrueBeam.

  6. Dose algorithm for EXTRAD 4100S extremity dosimeter for use at Sandia National Laboratories.

    SciTech Connect

    Potter, Charles Augustus

    2011-05-01

    An updated algorithm for the EXTRAD 4100S extremity dosimeter has been derived. This algorithm optimizes the binning of dosimeter element ratios and uses a quadratic function to determine the response factors for low response ratios. This results in lower systematic bias across all test categories and eliminates the need for the 'red strap' algorithm that was used for high energy beta/gamma emitting radionuclides. The Radiation Protection Dosimetry Program (RPDP) at Sandia National Laboratories uses the Thermo Fisher EXTRAD 4100S extremity dosimeter, shown in Fig. 1.1, to determine shallow dose to the extremities of potentially exposed individuals. This dosimeter consists of two LiF TLD elements or 'chipstrates', one of TLD-700 (7Li) and one of TLD-100 (natural Li), separated by a tin filter. Following readout and background subtraction, the ratio of the responses of the two elements is determined, defining the penetrability of the incident radiation. While this penetrability approximates the incident energy of the radiation, X-rays and beta particles exist in energy distributions that make the determination of dose conversion factors less straightforward.
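
    The ratio-based dose conversion described above can be sketched in a few lines; the bin edges, the quadratic coefficients, and the function names below are purely illustrative assumptions and are not the fitted parameters of the Sandia algorithm.

        def element_ratio(tld700_reading, tld100_reading):
            """Penetrability proxy: ratio of the two background-subtracted element responses."""
            return tld700_reading / tld100_reading

        def response_factor(ratio):
            """Piecewise response factor: quadratic for low (soft) ratios, binned otherwise.
            Coefficients and bin edges are illustrative assumptions only."""
            if ratio < 0.6:                       # low penetrability (soft beta / low-energy X ray)
                return 0.5 + 1.2 * ratio - 0.4 * ratio ** 2
            if ratio < 0.9:
                return 1.05
            return 1.0                            # highly penetrating photons

        def shallow_dose(tld700_reading, tld100_reading):
            r = element_ratio(tld700_reading, tld100_reading)
            return tld700_reading / response_factor(r)

        print(shallow_dose(8.5, 10.0))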

  7. Dosimetric validation of the Acuros XB Advanced Dose Calculation algorithm: fundamental characterization in water

    NASA Astrophysics Data System (ADS)

    Fogliata, Antonella; Nicolini, Giorgia; Clivio, Alessandro; Vanetti, Eugenio; Mancosu, Pietro; Cozzi, Luca

    2011-05-01

    This corrigendum intends to clarify some important points that were not clearly or properly addressed in the original paper, and for which the authors apologize. The original description of the first Acuros algorithm is from the developers, published in Physics in Medicine and Biology by Vassiliev et al (2010) in the paper entitled 'Validation of a new grid-based Boltzmann equation solver for dose calculation in radiotherapy with photon beams'. The main equations describing the algorithm reported in our paper, implemented as the 'Acuros XB Advanced Dose Calculation Algorithm' in the Varian Eclipse treatment planning system, were originally described (for the original Acuros algorithm) in the above mentioned paper by Vassiliev et al. The intention of our description in our paper was to give readers an overview of the algorithm, not pretending to have authorship of the algorithm itself (used as implemented in the planning system). Unfortunately our paper was not clear, particularly in not allocating full credit to the work published by Vassiliev et al on the original Acuros algorithm. Moreover, it is important to clarify that we have not adapted any existing algorithm, but have used the Acuros XB implementation in the Eclipse planning system from Varian. In particular, the original text of our paper should have been as follows: On page 1880 the sentence 'A prototype LBTE solver, called Attila (Wareing et al 2001), was also applied to external photon beam dose calculations (Gifford et al 2006, Vassiliev et al 2008, 2010). Acuros XB builds upon many of the methods in Attila, but represents a ground-up rewrite of the solver where the methods were adapted especially for external photon beam dose calculations' should be corrected to 'A prototype LBTE solver, called Attila (Wareing et al 2001), was also applied to external photon beam dose calculations (Gifford et al 2006, Vassiliev et al 2008). A new algorithm called Acuros, developed by the Transpire Inc. group, was

  8. SU-E-T-164: Evaluation of Electron Dose Distribution Using Two Algorithms

    SciTech Connect

    Liu, D; Li, Z; Shang, K; Jing, Z; Wang, J; Miao, M; Yang, J

    2014-06-01

    Purpose: To assess the differences between electron dose distributions calculated with the Monte Carlo and Electron 3D radiotherapy algorithms in a heterogeneous phantom. Methods: A phantom consisting of two different materials (lungs mimicked by low-density cork and the rest by polystyrene) with an 11x16 cm field size (SSD = 100 cm) was used to estimate the two-dimensional dose distributions under 6 and 18 MeV beams. Representing two different types of tissue, the heterogeneous phantom comprised 3 identical slabs in the longitudinal direction with a thickness of 1 cm for each slab and 2 slabs with a thickness of 2.5 cm. The Monte Carlo/MCTP application package, consisting of five codes, was used to simulate the electron beams of a Varian Clinac 23IX. A 20x20 cm2 type III (open walled) applicator was used in these simulations. It has been shown elsewhere that the agreement of the phase space data between the calculation results of the MCTP application package and the measured data was within 2% for depth-dose and transverse profiles, as well as output factor calculations. The electron 3D algorithm provided by Pinnacle 8.0m and the MCTP application package were applied for the two-dimensional dose distribution calculation. The curves at 50% and 100% of the prescribed dose were observed for 6 and 18 MeV beams, respectively. Results: The MC calculation results showed excellent agreement with the electron 3D calculations in terms of two-dimensional dose distributions for 6 and 18 MeV beams, except at the distal boundary at the junction of the high- and low-density regions. Conclusions: A case study showed that the Monte Carlo/MCTP method could be used to better reflect the dose variation caused by heterogeneous tissues.

  9. X-Ray Dose Reduction in Abdominal Computed Tomography Using Advanced Iterative Reconstruction Algorithms

    PubMed Central

    Ning, Peigang; Zhu, Shaocheng; Shi, Dapeng; Guo, Ying; Sun, Minghua

    2014-01-01

    Objective This work aims to explore the effects of adaptive statistical iterative reconstruction (ASiR) and model-based iterative reconstruction (MBIR) algorithms in reducing computed tomography (CT) radiation dosages in abdominal imaging. Methods CT scans on a standard male phantom were performed at different tube currents. Images at the different tube currents were reconstructed with the filtered back-projection (FBP), 50% ASiR and MBIR algorithms and compared. The CT value, image noise and contrast-to-noise ratios (CNRs) of the reconstructed abdominal images were measured. Volumetric CT dose indexes (CTDIvol) were recorded. Results At different tube currents, 50% ASiR and MBIR significantly reduced image noise and increased the CNR when compared with FBP. The minimal tube current values required by FBP, 50% ASiR, and MBIR to achieve acceptable image quality using this phantom were 200, 140, and 80 mA, respectively. At the identical image quality, 50% ASiR and MBIR reduced the radiation dose by 35.9% and 59.9% respectively when compared with FBP. Conclusions Advanced iterative reconstruction techniques are able to reduce image noise and increase image CNRs. Compared with FBP, 50% ASiR and MBIR reduced radiation doses by 35.9% and 59.9%, respectively. PMID:24664174

  10. Adapted Prescription Dose for Monte Carlo Algorithm in Lung SBRT: Clinical Outcome on 205 Patients

    PubMed Central

    Bibault, Jean-Emmanuel; Mirabel, Xavier; Lacornerie, Thomas; Tresch, Emmanuelle; Reynaert, Nick; Lartigau, Eric

    2015-01-01

    Purpose SBRT is the standard of care for inoperable patients with early-stage lung cancer without lymph node involvement. Excellent local control rates have been reported in a large number of series. However, prescription doses and calculation algorithms vary to a great extent between studies, even if most teams prescribe to the D95 of the PTV. Type A algorithms are known to produce dosimetric discrepancies in heterogeneous tissues such as lungs. This study was performed to present a Monte Carlo (MC) prescription dose for NSCLC adapted to lesion size and location and compare the clinical outcomes of two cohorts of patients treated with a standard prescription dose calculated by a type A algorithm or the proposed MC protocol. Patients and Methods Patients were treated from January 2011 to April 2013 with a type B algorithm (MC) prescription with 54 Gy in three fractions for peripheral lesions with a diameter under 30 mm, 60 Gy in 3 fractions for lesions with a diameter over 30 mm, and 55 Gy in five fractions for central lesions. Clinical outcome was compared to a series of 121 patients treated with a type A algorithm (TA) with three fractions of 20 Gy for peripheral lesions and 60 Gy in five fractions for central lesions prescribed to the PTV D95 until January 2011. All treatment plans were recalculated with both algorithms for this study. Spearman’s rank correlation coefficient was calculated for GTV and PTV. Local control, overall survival and toxicity were compared between the two groups. Results 205 patients with 214 lesions were included in the study. Among these, 93 lesions were treated with MC and 121 were treated with TA. Overall survival rates were 86% and 94% at one and two years, respectively. Local control rates were 79% and 93% at one and two years respectively. There was no significant difference between the two groups for overall survival (p = 0.785) or local control (p = 0.934). Fifty-six patients (27%) developed grade I lung fibrosis without

  11. Adaptation of the CVT algorithm for catheter optimization in high dose rate brachytherapy

    SciTech Connect

    Poulin, Eric; Fekete, Charles-Antoine Collins; Beaulieu, Luc; Létourneau, Mélanie; Fenster, Aaron; Pouliot, Jean

    2013-11-15

    Purpose: An innovative, simple, and fast method to optimize the number and position of catheters is presented for prostate and breast high dose rate (HDR) brachytherapy, for both arbitrary templates and template-free implants (such as robotic templates). Methods: Eight clinical cases, previously treated in our clinic, were chosen randomly from a bank of patients to test our method. The 2D Centroidal Voronoi Tessellations (CVT) algorithm was adapted to distribute catheters uniformly in space, within the maximum external contour of the planning target volume. The catheter optimization procedure includes the inverse planning simulated annealing algorithm (IPSA). Complete treatment plans can then be generated from the algorithm for different numbers of catheters. The best plan is chosen from different dosimetry criteria and will automatically provide the number of catheters and their positions. After the CVT algorithm parameters were optimized for speed and dosimetric results, the method was validated against prostate clinical cases, using clinically relevant dose parameters. The robustness to implantation error was also evaluated. Finally, the efficiency of the method was tested in breast interstitial HDR brachytherapy cases. Results: The effect of the number and locations of the catheters on prostate cancer patients was studied. Treatment plans with better or equivalent dose distributions could be obtained with fewer catheters. A better or equal prostate V100 was obtained down to 12 catheters. Plans with nine or fewer catheters would not be clinically acceptable in terms of prostate V100 and D90. Implantation errors up to 3 mm were acceptable since no statistical difference was found when compared to 0 mm error (p > 0.05). No significant difference in dosimetric indices was observed for the different combinations of parameters within the CVT algorithm. A linear relation was found between the number of random points and the optimization time of the CVT algorithm. Because the
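
    A minimal sketch of the centroidal Voronoi tessellation idea used above, implemented here as plain Lloyd iterations over points sampled inside the target: each site is moved to the centroid of the samples closest to it, which spreads the sites uniformly over the target. The rectangular target, point counts, and iteration count are illustrative assumptions, and the clinical optimization layer (IPSA, dosimetric scoring) is not reproduced.

        import numpy as np

        rng = np.random.default_rng(0)

        def lloyd_cvt(samples, n_sites, iterations=50):
            """Lloyd's algorithm: move each site to the centroid of its Voronoi cell,
            approximated with a dense cloud of sample points inside the target."""
            sites = samples[rng.choice(len(samples), n_sites, replace=False)].copy()
            for _ in range(iterations):
                # assign each sample point to its nearest site
                d2 = ((samples[:, None, :] - sites[None, :, :]) ** 2).sum(axis=2)
                nearest = d2.argmin(axis=1)
                for k in range(n_sites):
                    members = samples[nearest == k]
                    if len(members):
                        sites[k] = members.mean(axis=0)
            return sites

        # Toy target: points sampled uniformly inside a 40 x 30 mm rectangle (stand-in for a PTV contour).
        target_samples = rng.uniform([0, 0], [40, 30], size=(4000, 2))
        catheters = lloyd_cvt(target_samples, n_sites=14)
        print(np.round(catheters, 1))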

  12. Development and verification of an analytical algorithm to predict absorbed dose distributions in ocular proton therapy using Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Koch, Nicholas C.; Newhauser, Wayne D.

    2010-02-01

    Proton beam radiotherapy is an effective and non-invasive treatment for uveal melanoma. Recent research efforts have focused on improving the dosimetric accuracy of treatment planning and overcoming the present limitation of relative analytical dose calculations. Monte Carlo algorithms have been shown to accurately predict dose per monitor unit (D/MU) values, but this has yet to be shown for analytical algorithms dedicated to ocular proton therapy, which are typically less computationally expensive than Monte Carlo algorithms. The objective of this study was to determine if an analytical method could predict absolute dose distributions and D/MU values for a variety of treatment fields like those used in ocular proton therapy. To accomplish this objective, we used a previously validated Monte Carlo model of an ocular nozzle to develop an analytical algorithm to predict three-dimensional distributions of D/MU values from pristine Bragg peaks and therapeutically useful spread-out Bragg peaks (SOBPs). Results demonstrated generally good agreement between the analytical and Monte Carlo absolute dose calculations. While agreement in the proximal region decreased for beams with less penetrating Bragg peaks compared with the open-beam condition, the difference was shown to be largely attributable to edge-scattered protons. A method for including this effect in any future analytical algorithm was proposed. Comparisons of D/MU values showed typical agreement to within 0.5%. We conclude that analytical algorithms can be employed to accurately predict absolute proton dose distributions delivered by an ocular nozzle.

  13. Development of a dose algorithm for the modified panasonic UD-802 personal dosimeter used at three mile island

    SciTech Connect

    Miklos, J. A.; Plato, P.

    1988-01-01

    During the fall of 1981, the personnel dosimetry group at GPU Nuclear Corporation at Three Mile Island (TMI) requested assistance from The University of Michigan (UM) in developing a dose algorithm for use at TMI-2. The dose algorithm had to satisfy the specific needs of TMI-2, particularly the need to distinguish beta-particle emitters of different energies, as well as having the capability of satisfying the requirements of the American National Standards Institute (ANSI) N13.11-1983 standard. A standard Panasonic UD-802 dosimeter was modified by having the plastic filter over element 2 removed. The dosimeter and hanger consist of the elements with a 14 mg/cm2 density thickness and the filtrations shown. The hanger on this dosimeter had a double open window to facilitate monitoring for low-energy beta particles. The dose algorithm was written to satisfy the requirements of the ANSI N13.11-1983 standard, to include 204Tl with mixtures of 204Tl with 90Sr/90Y and 137Cs, and to include 81- and 200-keV average energy X-ray spectra. Stress tests were conducted to observe the algorithm's performance at low doses, temperature, humidity, and the residual response following high-dose irradiations. The ability of the algorithm to determine dose from the beta particles of 147Pm was also investigated.

  14. Noise-reducing algorithms do not necessarily provide superior dose optimisation for hepatic lesion detection with multidetector CT

    PubMed Central

    Dobeli, K L; Lewis, S J; Meikle, S R; Thiele, D L; Brennan, P C

    2013-01-01

    Objective: To compare the dose-optimisation potential of a smoothing filtered backprojection (FBP) and a hybrid FBP/iterative algorithm to that of a standard FBP algorithm at three slice thicknesses for hepatic lesion detection with multidetector CT. Methods: A liver phantom containing a 9.5-mm opacity with a density of 10 HU below background was scanned at 125, 100, 75, 50 and 25 mAs. Data were reconstructed with standard FBP (B), smoothing FBP (A) and hybrid FBP/iterative (iDose4) algorithms at 5-, 3- and 1-mm collimation. 10 observers marked opacities using a four-point confidence scale. Jackknife alternative free-response receiver operating characteristic figure of merit (FOM), sensitivity and noise were calculated. Results: Compared with the 125-mAs/5-mm setting for each algorithm, significant reductions in FOM (p<0.05) and sensitivity (p<0.05) were found for all three algorithms for all exposures at 1-mm thickness and for all slice thicknesses at 25 mAs, with the exception of the 25-mAs/5-mm setting for the B algorithm. Sensitivity was also significantly reduced for all exposures at 3-mm thickness for the A algorithm (p<0.05). Noise for the A and iDose4 algorithms was approximately 13% and 21% lower, respectively, than for the B algorithm. Conclusion: Superior performance for hepatic lesion detection was not shown with either a smoothing FBP algorithm or a hybrid FBP/iterative algorithm compared with a standard FBP technique, even though noise reduction with thinner slices was demonstrated with the alternative approaches. Advances in knowledge: Reductions in image noise with non-standard CT algorithms do not necessarily translate to an improvement in low-contrast object detection. PMID:23392194

  15. Toward adaptive radiotherapy for head and neck patients: Uncertainties in dose warping due to the choice of deformable registration algorithm

    SciTech Connect

    Veiga, Catarina; Royle, Gary; Lourenço, Ana Mónica; Mouinuddin, Syed; Herk, Marcel van; Modat, Marc; Ourselin, Sébastien; McClelland, Jamie R.

    2015-02-15

    Purpose: The aims of this work were to evaluate the performance of several deformable image registration (DIR) algorithms implemented in our in-house software (NiftyReg) and the uncertainties inherent to using different algorithms for dose warping. Methods: The authors describe a DIR based adaptive radiotherapy workflow, using CT and cone-beam CT (CBCT) imaging. The transformations that mapped the anatomy between the two time points were obtained using four different DIR approaches available in NiftyReg. These included a standard unidirectional algorithm and more sophisticated bidirectional ones that encourage or ensure inverse consistency. The forward (CT-to-CBCT) deformation vector fields (DVFs) were used to propagate the CT Hounsfield units and structures to the daily geometry for “dose of the day” calculations, while the backward (CBCT-to-CT) DVFs were used to remap the dose of the day onto the planning CT (pCT). Data from five head and neck patients were used to evaluate the performance of each implementation based on geometrical matching, physical properties of the DVFs, and similarity between warped dose distributions. Geometrical matching was verified in terms of dice similarity coefficient (DSC), distance transform, false positives, and false negatives. The physical properties of the DVFs were assessed calculating the harmonic energy, determinant of the Jacobian, and inverse consistency error of the transformations. Dose distributions were displayed on the pCT dose space and compared using dose difference (DD), distance to dose difference, and dose volume histograms. Results: All the DIR algorithms gave similar results in terms of geometrical matching, with an average DSC of 0.85 ± 0.08, but the underlying properties of the DVFs varied in terms of smoothness and inverse consistency. When comparing the doses warped by different algorithms, we found a root mean square DD of 1.9% ± 0.8% of the prescribed dose (pD) and that an average of 9% ± 4% of
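
    For the geometric matching metric mentioned above, a small sketch of the Dice similarity coefficient on binary masks; the toy mask shapes below are assumptions used only to exercise the formula.

        import numpy as np

        def dice(mask_a, mask_b):
            """Dice similarity coefficient between two boolean masks: 2|A∩B| / (|A| + |B|)."""
            a = np.asarray(mask_a, dtype=bool)
            b = np.asarray(mask_b, dtype=bool)
            denom = a.sum() + b.sum()
            return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

        # Toy example: a contour propagated with a 2-voxel shift against the original.
        a = np.zeros((50, 50), dtype=bool); a[10:30, 10:30] = True
        b = np.zeros((50, 50), dtype=bool); b[12:32, 10:30] = True
        print(round(dice(a, b), 3))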

  16. softMip: a novel projection algorithm for ultra-low-dose computed tomography.

    PubMed

    Meyer, Henning; Juran, Ralf; Rogalla, Patrik

    2008-01-01

    Two projection algorithms are currently available for viewing computed tomography (CT) data sets: average projection (AVG) and maximum intensity projection (MIP). Although AVG images feature good suppression of image noise but reduced edge sharpness, MIP images are characterized by good edge sharpness but also amplify image noise. Ultra-low-dose (ULD) CT has very low radiation exposure but has high image noise. Maximum intensity projection images of ULDCT data sets amplify image noise and are therefore unsuitable for image interpretation in the routine clinical setting. We developed a synthesis of both algorithms that tries to unite the respective advantages. The resulting softMip algorithm was implemented in C++ and installed on a workstation. Depending on the settings used, softMip images can represent any gradation between MIP and AVG. The new softMip algorithm was evaluated and compared with MIP and AVG in terms of image noise and edge sharpness in a series of phantom experiments performed on 7 different CT scanners. Furthermore, image quality of the transition from AVG to MIP by means of softMip was compared with the image quality of simply blending AVG and MIP. Images generated with softMip showed less image noise than MIP images (P < 0.0005) and higher edge sharpness than AVG images (P < 0.0005). The softMip transition from AVG to MIP had a better ratio of edge sharpness to image noise than blending (P < 0.0005). Our results suggest that softMip is a very promising projection procedure for postprocessing cross-sectional image data, especially ULDCT data sets. PMID:18520560
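
    The abstract describes softMip only as a tunable gradation between AVG and MIP; the published formula is not given here, so the sketch below substitutes a simple rank-weighted projection along the viewing axis as one plausible interpolation. This is purely an assumption for illustration, not the softMip algorithm itself.

        import numpy as np

        def soft_projection(volume, softness=0.0, axis=0):
            """Rank-weighted projection: softness=0 reproduces the mean (AVG-like),
            large softness emphasizes the maximum (MIP-like). Illustrative only."""
            values = np.sort(volume, axis=axis)                 # ascending along the projection axis
            n = values.shape[axis]
            ranks = np.arange(n, dtype=float)
            weights = np.exp(softness * (ranks - ranks[-1]))    # heavier weight on the largest values
            weights /= weights.sum()
            shape = [1] * volume.ndim
            shape[axis] = n
            return (values * weights.reshape(shape)).sum(axis=axis)

        vol = np.random.default_rng(1).normal(100, 20, size=(60, 128, 128))
        avg_like = soft_projection(vol, softness=0.0)   # equals the plain average projection
        mip_like = soft_projection(vol, softness=5.0)   # approaches the maximum intensity projection
        print(avg_like.mean(), mip_like.mean())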

  17. The impact of low-Z and high-Z metal implants in IMRT: A Monte Carlo study of dose inaccuracies in commercial dose algorithms

    SciTech Connect

    Spadea, Maria Francesca; Verburg, Joost Mathias; Seco, Joao; Baroni, Guido

    2014-01-15

    Purpose: The aim of the study was to evaluate the dosimetric impact of low-Z and high-Z metallic implants on IMRT plans. Methods: Computed tomography (CT) scans of three patients were analyzed to study effects due to the presence of Titanium (low-Z), Platinum and Gold (high-Z) inserts. To eliminate artifacts in CT images, a sinogram-based metal artifact reduction algorithm was applied. IMRT dose calculations were performed on both the uncorrected and corrected images using a commercial planning system (convolution/superposition algorithm) and an in-house Monte Carlo platform. Dose differences between uncorrected and corrected datasets were computed and analyzed using the gamma index passing rate (Pγ<1), setting 2 mm and 2% as the distance-to-agreement and dose-difference criteria, respectively. Beam-specific depth dose profiles across the metal were also examined. Results: Dose discrepancies between corrected and uncorrected datasets were not significant for low-Z material. High-Z materials caused under-dosage of 20%–25% in the region surrounding the metal and overdosage of 10%–15% downstream of the hardware. The gamma index test yielded Pγ<1 > 99% for all low-Z cases, while for high-Z cases it returned 91% < Pγ<1 < 99%. Analysis of the depth dose curve of a single beam for low-Z cases revealed that, although the dose attenuation is altered inside the metal, it does not differ downstream of the insert. However, for high-Z metal implants the dose is increased up to 10%–12% around the insert. In addition, the Monte Carlo method was more sensitive to the presence of metal inserts than the superposition/convolution algorithm. Conclusions: The reduction of metal artifacts in CT images is dosimetrically relevant for high-Z implants. In this case, dose distribution should be calculated using Monte Carlo algorithms, given their superior accuracy in dose modeling in and around the metal. In addition, the knowledge of the composition of metal inserts improves the accuracy of

  18. A GPU-based finite-size pencil beam algorithm with 3D-density correction for radiotherapy dose calculation

    NASA Astrophysics Data System (ADS)

    Gu, Xuejun; Jelen, Urszula; Li, Jinsheng; Jia, Xun; Jiang, Steve B.

    2011-06-01

    Targeting at the development of an accurate and efficient dose calculation engine for online adaptive radiotherapy, we have implemented a finite-size pencil beam (FSPB) algorithm with a 3D-density correction method on graphics processing unit (GPU). This new GPU-based dose engine is built on our previously published ultrafast FSPB computational framework (Gu et al 2009 Phys. Med. Biol. 54 6287-97). Dosimetric evaluations against Monte Carlo dose calculations are conducted on ten IMRT treatment plans (five head-and-neck cases and five lung cases). For all cases, there is improvement with the 3D-density correction over the conventional FSPB algorithm and for most cases the improvement is significant. Regarding the efficiency, because of the appropriate arrangement of memory access and the usage of GPU intrinsic functions, the dose calculation for an IMRT plan can be accomplished well within 1 s (except for one case) with this new GPU-based FSPB algorithm. Compared to the previous GPU-based FSPB algorithm without 3D-density correction, this new algorithm, though slightly sacrificing the computational efficiency (~5-15% lower), has significantly improved the dose calculation accuracy, making it more suitable for online IMRT replanning.
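
    As a hedged, one-dimensional illustration of a density correction of the kind discussed above: the depth dose is looked up at the water-equivalent (radiological) depth instead of the physical depth. The exponential kernel and the density values below are toy assumptions, not the published FSPB kernel or its 3D correction.

        import numpy as np

        def radiological_depth(densities, voxel_mm):
            """Cumulative water-equivalent depth at the far edge of each voxel."""
            return np.cumsum(np.asarray(densities, dtype=float)) * voxel_mm

        def depth_dose_water(depth_mm, mu=0.005):
            """Toy central-axis depth dose in water (simple exponential fall-off)."""
            return np.exp(-mu * np.asarray(depth_mm, dtype=float))

        densities = np.array([1.0] * 30 + [0.25] * 60 + [1.0] * 30)   # tissue / lung / tissue, 1 mm voxels
        d_rad = radiological_depth(densities, voxel_mm=1.0)
        dose_corrected   = depth_dose_water(d_rad)                     # density-scaled lookup
        dose_uncorrected = depth_dose_water(np.arange(1, len(densities) + 1))
        print(dose_corrected[-1], dose_uncorrected[-1])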

  19. A GPU-based finite-size pencil beam algorithm with 3D-density correction for radiotherapy dose calculation.

    PubMed

    Gu, Xuejun; Jelen, Urszula; Li, Jinsheng; Jia, Xun; Jiang, Steve B

    2011-06-01

    Targeting at the development of an accurate and efficient dose calculation engine for online adaptive radiotherapy, we have implemented a finite-size pencil beam (FSPB) algorithm with a 3D-density correction method on graphics processing unit (GPU). This new GPU-based dose engine is built on our previously published ultrafast FSPB computational framework (Gu et al 2009 Phys. Med. Biol. 54 6287-97). Dosimetric evaluations against Monte Carlo dose calculations are conducted on ten IMRT treatment plans (five head-and-neck cases and five lung cases). For all cases, there is improvement with the 3D-density correction over the conventional FSPB algorithm and for most cases the improvement is significant. Regarding the efficiency, because of the appropriate arrangement of memory access and the usage of GPU intrinsic functions, the dose calculation for an IMRT plan can be accomplished well within 1 s (except for one case) with this new GPU-based FSPB algorithm. Compared to the previous GPU-based FSPB algorithm without 3D-density correction, this new algorithm, though slightly sacrificing the computational efficiency (∼5-15% lower), has significantly improved the dose calculation accuracy, making it more suitable for online IMRT replanning. PMID:21558589

  20. Characterization of adaptive statistical iterative reconstruction algorithm for dose reduction in CT: A pediatric oncology perspective

    SciTech Connect

    Brady, S. L.; Yee, B. S.; Kaufman, R. A.

    2012-09-15

    Purpose: This study demonstrates a means of implementing an adaptive statistical iterative reconstruction (ASiR™) technique for dose reduction in computed tomography (CT) while maintaining similar noise levels in the reconstructed image. The effects of image quality and noise texture were assessed at all implementation levels of ASiR™. Empirically derived dose reduction limits were established for ASiR™ for imaging of the trunk for a pediatric oncology population ranging from 1 yr old through adolescence/adulthood. Methods: Image quality was assessed using metrics established by the American College of Radiology (ACR) CT accreditation program. Each image quality metric was tested using the ACR CT phantom with 0%-100% ASiR™ blended with filtered back projection (FBP) reconstructed images. Additionally, the noise power spectrum (NPS) was calculated for three common reconstruction filters of the trunk. The empirically derived limitations on ASiR™ implementation for dose reduction were assessed using 1-, 5-, and 10-yr-old and adolescent/adult anthropomorphic phantoms. To assess dose reduction limits, the phantoms were scanned in increments of increased noise index (decrementing mA using automatic tube current modulation) balanced with ASiR™ reconstruction to maintain noise equivalence of the 0% ASiR™ image. Results: The ASiR™ algorithm did not produce any unfavorable effects on image quality as assessed by ACR criteria. Conversely, low-contrast resolution was found to improve due to the reduction of noise in the reconstructed images. NPS calculations demonstrated that images with lower frequency noise had lower noise variance and coarser graininess at progressively higher percentages of ASiR™ reconstruction; and in spite of the similar magnitudes of noise, the image reconstructed with 50% or more ASiR™ presented a more

  1. A generalized 2D pencil beam scaling algorithm for proton dose calculation in heterogeneous slab geometries

    PubMed Central

    Westerly, David C.; Mo, Xiaohu; Tomé, Wolfgang A.; Mackie, Thomas R.; DeLuca, Paul M.

    2013-01-01

    Purpose: Pencil beam algorithms are commonly used for proton therapy dose calculations. Szymanowski and Oelfke [“Two-dimensional pencil beam scaling: An improved proton dose algorithm for heterogeneous media,” Phys. Med. Biol. 47, 3313–3330 (2002); doi:10.1088/0031-9155/47/18/304] developed a two-dimensional (2D) scaling algorithm which accurately models the radial pencil beam width as a function of depth in heterogeneous slab geometries using a scaled expression for the radial kernel width in water as a function of depth and kinetic energy. However, an assumption made in the derivation of the technique limits its range of validity to cases where the input expression for the radial kernel width in water is derived from a local scattering power model. The goal of this work is to derive a generalized form of 2D pencil beam scaling that is independent of the scattering power model and appropriate for use with any expression for the radial kernel width in water as a function of depth. Methods: Using Fermi-Eyges transport theory, the authors derive an expression for the radial pencil beam width in heterogeneous slab geometries which is independent of the proton scattering power and related quantities. The authors then perform test calculations in homogeneous and heterogeneous slab phantoms using both the original 2D scaling model and the new model with expressions for the radial kernel width in water computed from both local and nonlocal scattering power models, as well as a nonlocal parameterization of Molière scattering theory. In addition to kernel width calculations, dose calculations are also performed for a narrow Gaussian proton beam. Results: Pencil beam width calculations indicate that both 2D scaling formalisms perform well when the radial kernel width in water is derived from a local scattering power model. Computing the radial kernel width from a nonlocal scattering model results in the local 2D scaling formula under-predicting the pencil beam width by as
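
    The Fermi-Eyges relationship underlying the pencil-beam width calculation above can be written compactly: for a scattering power T(z), the variance of the lateral spread at depth z is A2(z) = ∫0^z T(z')(z − z')² dz'. The sketch below evaluates this numerically for a toy, depth-independent scattering power; the value of T and the slab thickness are assumptions.

        import numpy as np

        def lateral_variance(scattering_power, depths_cm):
            """Fermi-Eyges second moment A2(z) = ∫ T(z') (z - z')^2 dz' on a uniform grid."""
            z = np.asarray(depths_cm, dtype=float)
            T = np.asarray(scattering_power, dtype=float)
            dz = z[1] - z[0]
            a2 = np.empty_like(z)
            for i, zi in enumerate(z):
                a2[i] = np.sum(T[: i + 1] * (zi - z[: i + 1]) ** 2) * dz
            return a2   # cm^2; the radial kernel width is sqrt(A2)

        depths = np.linspace(0.0, 10.0, 101)        # 10 cm water slab, 1 mm grid
        T_water = np.full_like(depths, 0.01)        # toy constant scattering power, rad^2/cm
        sigma = np.sqrt(lateral_variance(T_water, depths))
        print(sigma[-1])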

  2. Effect of nonlinear three-dimensional optimized reconstruction algorithm filter on image quality and radiation dose: Validation on phantoms

    SciTech Connect

    Bai Mei; Chen Jiuhong; Raupach, Rainer; Suess, Christoph; Tao Ying; Peng Mingchen

    2009-01-15

    A new technique called the nonlinear three-dimensional optimized reconstruction algorithm filter (3D ORA filter) is currently used to improve CT image quality and reduce radiation dose. This technical note describes the comparison of image noise, slice sensitivity profile (SSP), contrast-to-noise ratio, and modulation transfer function (MTF) on phantom images processed with and without the 3D ORA filter, and the effect of the 3D ORA filter on CT images at a reduced dose. For CT head scans the noise reduction was up to 54% with typical bone reconstruction algorithms (H70) and a 0.6 mm slice thickness; for liver CT scans the noise reduction was up to 30% with typical high-resolution reconstruction algorithms (B70) and a 0.6 mm slice thickness. MTF and SSP did not change significantly with the application of 3D ORA filtering (P>0.05), whereas noise was reduced (P<0.05). The low contrast detectability and MTF of images obtained at a reduced dose and filtered by the 3D ORA were equivalent to those of standard dose CT images; there was no significant difference in image noise of scans taken at a reduced dose, filtered using 3D ORA and standard dose CT (P>0.05). The 3D ORA filter shows good potential for reducing image noise without affecting image quality attributes such as sharpness. By applying this approach, the same image quality can be achieved whilst gaining a marked dose reduction.

  3. A 3D pencil-beam-based superposition algorithm for photon dose calculation in heterogeneous media

    NASA Astrophysics Data System (ADS)

    Tillikainen, L.; Helminen, H.; Torsti, T.; Siljamäki, S.; Alakuijala, J.; Pyyry, J.; Ulmer, W.

    2008-07-01

    In this work, a novel three-dimensional superposition algorithm for photon dose calculation is presented. The dose calculation is performed as a superposition of pencil beams, which are modified based on tissue electron densities. The pencil beams have been derived from Monte Carlo simulations, and are separated into lateral and depth-directed components. The lateral component is modeled using exponential functions, which allows accurate modeling of lateral scatter in heterogeneous tissues. The depth-directed component represents the total energy deposited on each plane, which is spread out using the lateral scatter functions. Finally, convolution in the depth direction is applied to account for tissue interface effects. The method can be used with the previously introduced multiple-source model for clinical settings. The method was compared against Monte Carlo simulations in several phantoms including lung- and bone-type heterogeneities. Comparisons were made for several field sizes for 6 and 18 MV energies. The deviations were generally within (2%, 2 mm) of the field central axis dmax. Significantly larger deviations (up to 8%) were found only for the smallest field in the lung slab phantom for 18 MV. The presented method was found to be accurate in a wide range of conditions making it suitable for clinical planning purposes.

  4. Feasibility study of a simple approximation algorithm for in-vivo dose reconstruction by using the transit dose measured using an EPID

    NASA Astrophysics Data System (ADS)

    Hwang, Ui-Jung; Song, Mi Hee; Baek, Tae Seong; Chung, Eun Ji; Yoon, Myonggeun

    2015-02-01

    The purpose of this study is to verify the accuracy of the dose delivered to the patient during intensity-modulated radiation therapy (IMRT) by using in-vivo dosimetry and to avoid accidental exposure of healthy tissues and organs close to tumors. The in-vivo dose was reconstructed by back projection of the transit dose with a simple approximation that considered only the percent depth dose and the inverse square law. In comparisons of dose distributions between the calculated dose map and the film measurement, the gamma index was less than one for 96.3% of all pixels with the homogeneous phantom, whereas the passing rate was reduced to 92.8% with the inhomogeneous phantom, a reduction apparently due to the inaccuracy of the reconstruction algorithm for inhomogeneity. The proposed method of calculating the dose inside a phantom was of comparable or better accuracy than the treatment planning system, suggesting that it can be used to verify the accuracy of the dose delivered to the patient during treatment.
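
    A hedged sketch of the simple approximation named above: the transit dose measured at the EPID is pushed back to a depth inside the phantom using only a percent-depth-dose ratio and the inverse-square law. The PDD table, distances, and field assumptions below are illustrative only and are not the authors' calibration.

        # Toy PDD table for a single field size (depth in cm -> percent depth dose), illustrative values.
        PDD = {0: 50.0, 1.5: 100.0, 5: 87.0, 10: 67.0, 15: 51.0, 20: 39.0}

        def reconstruct_dose(transit_dose, depth_cm, exit_depth_cm,
                             sdd_cm=150.0, sad_cm=100.0):
            """Back-project the EPID transit dose to a point at depth_cm along the same ray:
            scale by the PDD ratio and correct for the inverse-square distance change."""
            pdd_point = PDD[depth_cm]
            pdd_exit = PDD[exit_depth_cm]
            inverse_square = (sdd_cm / sad_cm) ** 2     # EPID plane back to the isocenter plane
            return transit_dose * (pdd_point / pdd_exit) * inverse_square

        # Example: 1.2 Gy measured at the EPID, exit surface at 20 cm depth,
        # reconstruct the dose at 10 cm depth along the same ray.
        print(round(reconstruct_dose(1.2, depth_cm=10, exit_depth_cm=20), 3))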

  5. Clinical implementation of a digital tomosynthesis-based seed reconstruction algorithm for intraoperative postimplant dose evaluation in low dose rate prostate brachytherapy

    SciTech Connect

    Brunet-Benkhoucha, Malik; Verhaegen, Frank; Lassalle, Stephanie; Beliveau-Nadeau, Dominic; Reniers, Brigitte; Donath, David; Taussky, Daniel; Carrier, Jean-Francois

    2009-11-15

    Purpose: The low dose rate brachytherapy procedure would benefit from an intraoperative postimplant dosimetry verification technique to identify possible suboptimal dose coverage and suggest a potential reimplantation. The main objective of this project is to develop an efficient, operator-free, intraoperative seed detection technique using the imaging modalities available in a low dose rate brachytherapy treatment room. Methods: This intraoperative detection allows a complete dosimetry calculation that can be performed right after an I-125 prostate seed implantation, while the patient is still under anesthesia. To accomplish this, a digital tomosynthesis-based algorithm was developed. This automatic filtered reconstruction of the 3D volume requires seven projections acquired over a total angle of 60° with an isocentric imaging system. Results: A phantom study was performed to validate the technique that was used in a retrospective clinical study involving 23 patients. In the patient study, the automatic tomosynthesis-based reconstruction yielded seed detection rates of 96.7% and 2.6% false positives. The seed localization error obtained with a phantom study is 0.4 ± 0.4 mm. The average time needed for reconstruction is below 1 min. The reconstruction algorithm also provides the seed orientation with an uncertainty of 10° ± 8°. The seed detection algorithm presented here is reliable and was efficiently used in the clinic. Conclusions: When combined with an appropriate coregistration technique to identify the organs in the seed coordinate system, this algorithm will offer new possibilities for a next generation of clinical brachytherapy systems.

  6. Accuracy of pencil-beam redefinition algorithm dose calculations in patient-like cylindrical phantoms for bolus electron conformal therapy

    SciTech Connect

    Carver, Robert L.; Hogstrom, Kenneth R.; Chu, Connel; Fields, Robert S.; Sprunger, Conrad P.

    2013-07-15

    Purpose: The purpose of this study was to document the improved accuracy of the pencil beam redefinition algorithm (PBRA) compared to the pencil beam algorithm (PBA) for bolus electron conformal therapy using cylindrical patient phantoms based on patient computed tomography (CT) scans of retromolar trigone and nose cancer. Methods: PBRA and PBA electron dose calculations were compared with measured dose in retromolar trigone and nose phantoms both with and without bolus. For the bolus treatment plans, a radiation oncologist outlined a planning target volume (PTV) on the central axis slice of the CT scan for each phantom. A bolus was designed using the planning.decimal® (p.d) software (.decimal, Inc., Sanford, FL) to conform the 90% dose line to the distal surface of the PTV. Dose measurements were taken with thermoluminescent dosimeters placed into predrilled holes. The Pinnacle3 (Philips Healthcare, Andover, MD) treatment planning system was used to calculate PBA dose distributions. The PBRA dose distributions were calculated with an in-house C++ program. In order to accurately account for the phantom materials, a table correlating CT number to relative electron stopping and scattering powers was compiled and used for both PBA and PBRA dose calculations. Accuracy was determined by comparing differences in measured and calculated dose, as well as distance to agreement for each measurement point. Results: The measured doses had an average precision of 0.9%. For the retromolar trigone phantom, the PBRA dose calculations had an average ±1σ dose difference (calculated - measured) of -0.65% ± 1.62% without the bolus and -0.20% ± 1.54% with the bolus. The PBA dose calculation had an average dose difference of 0.19% ± 3.27% without the bolus and -0.05% ± 3.14% with the bolus. For the nose phantom, the PBRA dose calculations had an average dose difference of 0.50% ± 3.06% without bolus and -0.18% ± 1.22% with the bolus. The PBA

  7. Comparison of Nine Statistical Model Based Warfarin Pharmacogenetic Dosing Algorithms Using the Racially Diverse International Warfarin Pharmacogenetic Consortium Cohort Database

    PubMed Central

    Liu, Rong; Li, Xi; Zhang, Wei; Zhou, Hong-Hao

    2015-01-01

    Objective Multiple linear regression (MLR) and machine learning techniques in pharmacogenetic algorithm-based warfarin dosing have been reported. However, the performance of these algorithms in racially diverse groups has never been objectively evaluated and compared. In this literature-based study, we compared the performances of eight machine learning techniques with those of MLR in a large, racially diverse cohort. Methods MLR, artificial neural network (ANN), regression tree (RT), multivariate adaptive regression splines (MARS), boosted regression tree (BRT), support vector regression (SVR), random forest regression (RFR), lasso regression (LAR) and Bayesian additive regression trees (BART) were applied in warfarin dose algorithms in a cohort from the International Warfarin Pharmacogenetics Consortium database. Covariates obtained by stepwise regression from 80% of randomly selected patients were used to develop algorithms. To compare the performances of these algorithms, the mean percentage of patients whose predicted dose fell within 20% of the actual dose (mean percentage within 20%) and the mean absolute error (MAE) were calculated in the remaining 20% of patients. The performances of these techniques in different races, as well as across the dose ranges of therapeutic warfarin, were compared. Robust results were obtained after 100 rounds of resampling. Results BART, MARS and SVR were statistically indistinguishable and significantly outperformed all the other approaches in the whole cohort (MAE: 8.84–8.96 mg/week, mean percentage within 20%: 45.88%–46.35%). In the White population, MARS and BART showed a higher mean percentage within 20% and a lower MAE than MLR (all p values < 0.05). In the Asian population, SVR, BART, MARS and LAR performed the same as MLR. MLR and LAR performed best in the Black population. When patients were grouped in terms of warfarin dose range, all machine learning techniques except ANN and LAR showed significantly
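
    The two evaluation metrics used above are straightforward to compute; a small sketch with made-up dose vectors follows (the numbers are illustrative and are not taken from the IWPC cohort).

        import numpy as np

        def mae(predicted, actual):
            """Mean absolute error of the predicted weekly dose."""
            return np.mean(np.abs(np.asarray(predicted, dtype=float) - np.asarray(actual, dtype=float)))

        def pct_within_20(predicted, actual):
            """Percentage of patients whose predicted dose falls within 20% of the actual dose."""
            predicted = np.asarray(predicted, dtype=float)
            actual = np.asarray(actual, dtype=float)
            return 100.0 * np.mean(np.abs(predicted - actual) <= 0.20 * actual)

        actual_dose    = np.array([21.0, 35.0, 28.0, 49.0, 14.0])   # mg/week, toy values
        predicted_dose = np.array([24.0, 30.0, 27.0, 38.0, 18.0])
        print(mae(predicted_dose, actual_dose), pct_within_20(predicted_dose, actual_dose))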

  8. TH-E-BRE-07: Development of Dose Calculation Error Predictors for a Widely Implemented Clinical Algorithm

    SciTech Connect

    Egan, A; Laub, W

    2014-06-15

    Purpose: Several shortcomings of the current implementation of the analytic anisotropic algorithm (AAA) may lead to dose calculation errors in highly modulated treatments delivered to highly heterogeneous geometries. Here we introduce a set of dosimetric error predictors that can be applied to a clinical treatment plan and patient geometry in order to identify high risk plans. Once a problematic plan is identified, the treatment can be recalculated with a more accurate algorithm in order to better assess its viability. Methods: Here we focus on three distinct sources of dosimetric error in the AAA algorithm. First, due to a combination of discrepancies in small-field beam modeling as well as volume averaging effects, dose calculated through small MLC apertures can be underestimated, while that behind small MLC blocks can be overestimated. Second, due to the rectilinear scaling of the Monte Carlo generated pencil beam kernel, energy is not properly transported through heterogeneities near, but not impeding, the central axis of the beamlet. Third, AAA overestimates dose in regions of very low density (< 0.2 g/cm3). We have developed an algorithm to detect the location and magnitude of each scenario within the patient geometry, namely the field-size index (FSI), the heterogeneous scatter index (HSI), and the low-density index (LDI), respectively. Results: Error indices successfully identify deviations between AAA and Monte Carlo dose distributions in simple phantom geometries. Algorithms are currently implemented in the MATLAB computing environment and are able to run on a typical RapidArc head and neck geometry in less than an hour. Conclusion: Because these error indices successfully identify each type of error in contrived cases, with sufficient benchmarking, this method can be developed into a clinical tool that may be able to help estimate AAA dose calculation errors and when it might be advisable to use Monte Carlo calculations.
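
    As a hedged illustration of the third predictor described above, the sketch below scores the fraction of beam-traversed voxels with very low density. The density threshold follows the abstract (< 0.2 g/cm3), while the array layout, the beam mask, and the scoring itself are illustrative assumptions rather than the authors' MATLAB implementation.

        import numpy as np

        def low_density_index(density_gcc, beam_mask, threshold=0.2):
            """Fraction of beam-traversed voxels with density below the threshold,
            a simple stand-in for a low-density error predictor."""
            in_beam = np.asarray(beam_mask, dtype=bool)
            low = np.asarray(density_gcc) < threshold
            n_beam = in_beam.sum()
            return (low & in_beam).sum() / n_beam if n_beam else 0.0

        density = np.full((40, 40, 40), 1.0)
        density[10:30, 10:30, 10:30] = 0.15          # lung-like low-density block
        beam = np.zeros_like(density, dtype=bool)
        beam[:, 15:25, 15:25] = True                 # toy rectangular beam path
        print(round(low_density_index(density, beam), 3))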

  9. Development of a phantom to validate high-dose-rate brachytherapy treatment planning systems with heterogeneous algorithms

    SciTech Connect

    Moura, Eduardo S.; Rostelato, Maria Elisa C. M.; Zeituni, Carlos A.

    2015-04-15

    Purpose: This work presents the development of a phantom to verify the treatment planning system (TPS) algorithms used for high-dose-rate (HDR) brachytherapy. It is designed to measure the relative dose in heterogeneous media. The experimental details used, simulation methods, and comparisons with a commercial TPS are also provided. Methods: To simulate heterogeneous conditions, four materials were used: Virtual Water™ (VM), BR50/50™, cork, and aluminum. The materials were arranged in 11 heterogeneity configurations. Three dosimeters were used to measure the relative response from an HDR 192Ir source: TLD-100™, Gafchromic® EBT3 film, and an Exradin™ A1SL ionization chamber. To compare the results from the experimental measurements, the various configurations were modeled in the PENELOPE/penEasy Monte Carlo code. Images of each setup geometry were acquired from a CT scanner and imported into BrachyVision™ TPS software, which includes a grid-based Boltzmann solver Acuros™. The results of the measurements performed in the heterogeneous setups were normalized to the dose values measured in the homogeneous Virtual Water™ setup and the respective differences due to the heterogeneities were considered. Additionally, dose values calculated based on the American Association of Physicists in Medicine-Task Group 43 formalism were compared to dose values calculated with the Acuros™ algorithm in the phantom. Calculated doses were compared at the same points, where measurements have been performed. Results: Differences in the relative response as high as 11.5% were found from the homogeneous setup when the heterogeneous materials were inserted into the experimental phantom. The aluminum and cork materials produced larger differences than the plastic materials, with the BR50/50™ material producing results similar to the Virtual Water™ results. Our experimental methods agree with the PENELOPE/penEasy simulations for most setups and dosimeters. The

  10. Assessing image quality and dose reduction of a new x-ray computed tomography iterative reconstruction algorithm using model observers

    SciTech Connect

    Tseng, Hsin-Wu Kupinski, Matthew A.; Fan, Jiahua; Sainath, Paavana; Hsieh, Jiang

    2014-07-15

    Purpose: A number of different techniques have been developed to reduce radiation dose in x-ray computed tomography (CT) imaging. In this paper, the authors will compare task-based measures of image quality of CT images reconstructed by two algorithms: conventional filtered back projection (FBP), and a new iterative reconstruction algorithm (IR). Methods: To assess image quality, the authors used the performance of a channelized Hotelling observer acting on reconstructed image slices. The selected channels are dense difference Gaussian channels (DDOG). A body phantom and a head phantom were imaged 50 times at different dose levels to obtain the data needed to assess image quality. The phantoms consisted of uniform backgrounds with low contrast signals embedded at various locations. The tasks the observer model performed included (1) detection of a signal of known location and shape, and (2) detection and localization of a signal of known shape. The employed DDOG channels are based on the response of the human visual system. Performance was assessed using the areas under ROC curves and areas under localization ROC curves. Results: For signal known exactly (SKE) and location unknown/signal shape known tasks with circular signals of different sizes and contrasts, the authors’ task-based measures showed that a FBP equivalent image quality can be achieved at lower dose levels using the IR algorithm. For the SKE case, the range of dose reduction is 50%–67% (head phantom) and 68%–82% (body phantom). For the study of location unknown/signal shape known, the dose reduction range can be reached at 67%–75% for head phantom and 67%–77% for body phantom case. These results suggest that the IR images at lower dose settings can reach the same image quality when compared to full dose conventional FBP images. Conclusions: The work presented provides an objective way to quantitatively assess the image quality of a newly introduced CT IR algorithm. The performance of the
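
    A compact sketch of the channelized Hotelling observer used above: images are projected onto a few channels and a Hotelling template is built from the channelized training data. The radially symmetric Gaussian channels, channel widths, and toy signal below are assumptions standing in for the study's DDOG channels; only the observer mechanics are illustrated.

        import numpy as np

        rng = np.random.default_rng(2)

        def gaussian_channels(size, widths=(2.0, 4.0, 8.0, 16.0)):
            """Radially symmetric Gaussian channels centered on the signal location."""
            y, x = np.mgrid[0:size, 0:size] - (size - 1) / 2.0
            r2 = x ** 2 + y ** 2
            chans = [np.exp(-r2 / (2 * w ** 2)).ravel() for w in widths]
            return np.stack(chans, axis=1)               # (pixels, n_channels)

        def cho_snr(signal_present, signal_absent, channels):
            """Hotelling observer SNR computed in channel space."""
            vp = signal_present @ channels               # channelized images, (n_images, n_channels)
            va = signal_absent @ channels
            s = vp.mean(axis=0) - va.mean(axis=0)        # mean channelized signal
            k = 0.5 * (np.cov(vp, rowvar=False) + np.cov(va, rowvar=False))
            w = np.linalg.solve(k, s)                    # Hotelling template
            return s @ w / np.sqrt(w @ k @ w)

        size, n = 32, 200
        r2 = ((np.mgrid[0:size, 0:size] - (size - 1) / 2.0) ** 2).sum(axis=0)
        signal = 0.8 * np.exp(-r2 / 18.0)                # toy low-contrast disk-like signal
        backgrounds = rng.normal(0, 1, (2 * n, size * size))
        sp = backgrounds[:n] + signal.ravel()
        sa = backgrounds[n:]
        print(round(cho_snr(sp, sa, gaussian_channels(size)), 2))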

  11. Influence of radiation dose and reconstruction algorithm in MDCT assessment of airway wall thickness: A phantom study

    SciTech Connect

    Gomez-Cardona, Daniel; Nagle, Scott K.; Li, Ke; Chen, Guang-Hong; Robinson, Terry E.

    2015-10-15

    Purpose: Wall thickness (WT) is an airway feature of great interest for the assessment of morphological changes in the lung parenchyma. Multidetector computed tomography (MDCT) has recently been used to evaluate airway WT, but the potential risk of radiation-induced carcinogenesis—particularly in younger patients—might limit a wider use of this imaging method in clinical practice. The recent commercial implementation of the statistical model-based iterative reconstruction (MBIR) algorithm, instead of the conventional filtered back projection (FBP) algorithm, has enabled considerable radiation dose reduction in many other clinical applications of MDCT. The purpose of this work was to study the impact of radiation dose and MBIR in the MDCT assessment of airway WT. Methods: An airway phantom was scanned using a clinical MDCT system (Discovery CT750 HD, GE Healthcare) at 4 kV levels and 5 mAs levels. Both FBP and a commercial implementation of MBIR (Veo™, GE Healthcare) were used to reconstruct CT images of the airways. For each kV–mAs combination and each reconstruction algorithm, the contrast-to-noise ratio (CNR) of the airways was measured, and the WT of each airway was measured and compared with the nominal value; the relative bias and the angular standard deviation in the measured WT were calculated. For each airway and reconstruction algorithm, the overall performance of WT quantification across all of the 20 kV–mAs combinations was quantified by the sum of squares (SSQs) of the difference between the measured and nominal WT values. Finally, the particular kV–mAs combination and reconstruction algorithm that minimized radiation dose while still achieving a reference WT quantification accuracy level was chosen as the optimal acquisition and reconstruction settings. Results: The wall thicknesses of seven airways of different sizes were analyzed in the study. Compared with FBP, MBIR improved the CNR of the airways, particularly at low radiation dose

  12. Extracting Gene Networks for Low-Dose Radiation Using Graph Theoretical Algorithms

    PubMed Central

    Voy, Brynn H; Scharff, Jon A; Perkins, Andy D; Saxton, Arnold M; Borate, Bhavesh; Chesler, Elissa J; Branstetter, Lisa K; Langston, Michael A

    2006-01-01

    Genes with common functions often exhibit correlated expression levels, which can be used to identify sets of interacting genes from microarray data. Microarrays typically measure expression across genomic space, creating a massive matrix of co-expression that must be mined to extract only the most relevant gene interactions. We describe a graph theoretical approach to extracting co-expressed sets of genes, based on the computation of cliques. Unlike the results of traditional clustering algorithms, cliques are not disjoint and allow genes to be assigned to multiple sets of interacting partners, consistent with biological reality. A graph is created by thresholding the correlation matrix to include only the correlations most likely to signify functional relationships. Cliques computed from the graph correspond to sets of genes for which significant edges are present between all members of the set, representing potential members of common or interacting pathways. Clique membership can be used to infer function about poorly annotated genes, based on the known functions of better-annotated genes with which they share clique membership (i.e., “guilt-by-association”). We illustrate our method by applying it to microarray data collected from the spleens of mice exposed to low-dose ionizing radiation. Differential analysis is used to identify sets of genes whose interactions are impacted by radiation exposure. The correlation graph is also queried independently of clique to extract edges that are impacted by radiation. We present several examples of multiple gene interactions that are altered by radiation exposure and thus represent potential molecular pathways that mediate the radiation response. PMID:16854212
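
    A hedged sketch of the graph-theoretical pipeline described above: threshold a gene-gene correlation matrix, build a graph, and enumerate maximal cliques with networkx. The random expression matrix, the amplification of the co-regulated block, and the 0.8 threshold are toy assumptions; the authors' exact clique solver is not reproduced.

        import numpy as np
        import networkx as nx

        rng = np.random.default_rng(3)

        # Toy expression matrix: 30 genes x 12 arrays, with one co-regulated block of 5 genes.
        expr = rng.normal(size=(30, 12))
        expr[:5] += 3.0 * rng.normal(size=12)            # shared signal makes genes 0-4 correlate strongly

        corr = np.corrcoef(expr)                          # gene-by-gene Pearson correlation
        threshold = 0.8                                   # keep only strong correlations (assumption)

        graph = nx.Graph()
        graph.add_nodes_from(range(expr.shape[0]))
        for i in range(corr.shape[0]):
            for j in range(i + 1, corr.shape[0]):
                if abs(corr[i, j]) >= threshold:
                    graph.add_edge(i, j)

        # Maximal cliques of size >= 3: candidate co-expressed gene sets (may be empty for other seeds).
        cliques = [c for c in nx.find_cliques(graph) if len(c) >= 3]
        print(cliques)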

  13. Extracting gene networks for low-dose radiation using graph theoretical algorithms.

    PubMed

    Voy, Brynn H; Scharff, Jon A; Perkins, Andy D; Saxton, Arnold M; Borate, Bhavesh; Chesler, Elissa J; Branstetter, Lisa K; Langston, Michael A

    2006-07-21

    Genes with common functions often exhibit correlated expression levels, which can be used to identify sets of interacting genes from microarray data. Microarrays typically measure expression across genomic space, creating a massive matrix of co-expression that must be mined to extract only the most relevant gene interactions. We describe a graph theoretical approach to extracting co-expressed sets of genes, based on the computation of cliques. Unlike the results of traditional clustering algorithms, cliques are not disjoint and allow genes to be assigned to multiple sets of interacting partners, consistent with biological reality. A graph is created by thresholding the correlation matrix to include only the correlations most likely to signify functional relationships. Cliques computed from the graph correspond to sets of genes for which significant edges are present between all members of the set, representing potential members of common or interacting pathways. Clique membership can be used to infer function about poorly annotated genes, based on the known functions of better-annotated genes with which they share clique membership (i.e., "guilt-by-association"). We illustrate our method by applying it to microarray data collected from the spleens of mice exposed to low-dose ionizing radiation. Differential analysis is used to identify sets of genes whose interactions are impacted by radiation exposure. The correlation graph is also queried independently of clique to extract edges that are impacted by radiation. We present several examples of multiple gene interactions that are altered by radiation exposure and thus represent potential molecular pathways that mediate the radiation response. PMID:16854212

  14. SU-E-T-477: An Efficient Dose Correction Algorithm Accounting for Tissue Heterogeneities in LDR Brachytherapy

    SciTech Connect

    Mashouf, S; Lai, P; Karotki, A; Keller, B; Beachey, D; Pignol, J

    2014-06-01

Purpose: Seed brachytherapy is currently used for adjuvant radiotherapy of early stage prostate and breast cancer patients. The current standard for calculation of dose surrounding the brachytherapy seeds is based on the American Association of Physicists in Medicine Task Group No. 43 (TG-43) formalism, which generates the dose in a homogeneous water medium. Recently, AAPM Task Group No. 186 emphasized the importance of accounting for tissue heterogeneities. This can be done using Monte Carlo (MC) methods, but it requires knowing the source structure and tissue atomic composition accurately. In this work we describe an efficient analytical dose inhomogeneity correction algorithm implemented on the MIM Symphony treatment planning platform to calculate dose distributions in heterogeneous media. Methods: An Inhomogeneity Correction Factor (ICF) is introduced as the ratio of absorbed dose in tissue to that in water medium. ICF is a function of tissue properties and independent of source structure. The ICF is extracted using CT images, and the absorbed dose in tissue can then be calculated by multiplying the dose as calculated by the TG-43 formalism times ICF. To evaluate the methodology, we compared our results with Monte Carlo simulations as well as experiments in phantoms with known density and atomic compositions. Results: The dose distributions obtained by applying the ICF to the TG-43 protocol agreed very well with those of Monte Carlo simulations as well as experiments in all phantoms. In all cases, the mean relative error was reduced by at least 50% when the ICF correction was applied to the TG-43 protocol. Conclusion: We have developed a new analytical dose calculation method which enables personalized dose calculations in heterogeneous media. The advantages over stochastic methods are computational efficiency and the ease of integration into the clinical setting, as detailed source structure and tissue segmentation are not needed. University of Toronto, Natural Sciences and
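    Conceptually, the correction described above amounts to a voxel-wise multiplication of the TG-43 water dose by a CT-derived inhomogeneity correction factor. The sketch below only illustrates that arithmetic on toy arrays; the ICF values and grid are hypothetical, not the MIM Symphony implementation.

```python
import numpy as np

# Hypothetical 3-D dose grid computed with the TG-43 formalism (dose to water, in Gy).
dose_tg43 = np.full((4, 4, 4), 1.0)

# Hypothetical inhomogeneity correction factor (ICF) grid derived from the CT image:
# ratio of absorbed dose in tissue to absorbed dose in water at each voxel.
icf = np.ones((4, 4, 4))
icf[:, :, 2:] = 0.93        # e.g. an adipose-like region absorbing less than water

# Heterogeneity-corrected dose: TG-43 dose times ICF, voxel by voxel.
dose_tissue = dose_tg43 * icf
print(dose_tissue[0, 0])
```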

  15. Stereotactic, Single-Dose Irradiation of Lung Tumors: A Comparison of Absolute Dose and Dose Distribution Between Pencil Beam and Monte Carlo Algorithms Based on Actual Patient CT Scans

    SciTech Connect

    Chen Huixiao; Lohr, Frank; Fritz, Peter; Wenz, Frederik; Dobler, Barbara; Lorenz, Friedlieb; Muehlnickel, Werner

    2010-11-01

Purpose: Dose calculation based on pencil beam (PB) algorithms has shortcomings in predicting dose in tissue heterogeneities. The aim of this study was to compare dose distributions of clinically applied non-intensity-modulated radiotherapy 15-MV plans for stereotactic body radiotherapy between voxel Monte Carlo (XVMC) calculation and PB calculation for lung lesions. Methods and Materials: To validate XVMC, one treatment plan was verified in an inhomogeneous thorax phantom with EDR2 film (Eastman Kodak, Rochester, NY). Both measured and calculated (PB and XVMC) dose distributions were compared regarding profiles and isodoses. Then, 35 lung plans originally created for clinical treatment by PB calculation with the Eclipse planning system (Varian Medical Systems, Palo Alto, CA) were recalculated by XVMC (investigational implementation in PrecisePLAN [Elekta AB, Stockholm, Sweden]). Clinically relevant dose-volume parameters for target and lung tissue were compared and analyzed statistically. Results: The XVMC calculation agreed well with film measurements (<1% difference in lateral profile), whereas the deviation between PB calculation and film measurements was up to +15%. On analysis of 35 clinical cases, the mean dose, minimal dose, and coverage dose value for 95% volume of the gross tumor volume were 1.14 ± 1.72 Gy, 1.68 ± 1.47 Gy, and 1.24 ± 1.04 Gy lower by XVMC compared with PB, respectively (prescription dose, 30 Gy). The lung volume covered by the 9 Gy isodose was 2.73% ± 3.12% higher when calculated by XVMC compared with PB. The largest differences were observed for small lesions circumferentially encompassed by lung tissue. Conclusions: Pencil beam dose calculation overestimates dose to the tumor and underestimates lung volumes exposed to a given dose consistently for 15-MV photons. The degree of difference between XVMC and PB is tumor size and location dependent. Therefore XVMC calculation is helpful to further optimize treatment planning.

  16. Comparison of dose calculation algorithms in phantoms with lung equivalent heterogeneities under conditions of lateral electronic disequilibrium.

    PubMed

    Carrasco, P; Jornet, N; Duch, M A; Weber, L; Ginjaume, M; Eudaldo, T; Jurado, D; Ruiz, A; Ribas, M

    2004-10-01

An extensive set of benchmark measurements of PDDs and beam profiles was performed in a heterogeneous layer phantom, including a lung equivalent heterogeneity, by means of several detectors and compared against the dose values predicted by different calculation algorithms in two treatment planning systems. PDDs were measured with TLDs, plane parallel and cylindrical ionization chambers, and beam profiles with films. Additionally, Monte Carlo simulations by means of the PENELOPE code were performed. Four different field sizes (10 x 10, 5 x 5, 2 x 2, and 1 x 1 cm2) and two lung equivalent materials (CIRS, ρ_e^w = 0.195, and St. Bartholomew Hospital, London, ρ_e^w = 0.244-0.322) were studied. The performance of four correction-based algorithms and one based on convolution-superposition was analyzed. The correction-based algorithms were the Batho, the Modified Batho, and the Equivalent TAR implemented in the Cadplan (Varian) treatment planning system and the TMS Pencil Beam from the Helax-TMS (Nucletron) treatment planning system. The convolution-superposition algorithm was the Collapsed Cone implemented in the Helax-TMS. The only studied calculation methods that correlated successfully with the measured values, within a 2% average inside all media, were the Collapsed Cone and the Monte Carlo simulation. The biggest difference between the predicted and the delivered dose on the beam axis was found for the EqTAR algorithm inside the CIRS lung equivalent material in a 2 x 2 cm2 18 MV x-ray beam. In these conditions, the average and maximum differences against the TLD measurements were 32% and 39%, respectively. In the water equivalent part of the phantom every algorithm correctly predicted the dose (within 2%) everywhere except very close to the interfaces, where differences up to 24% were found for 2 x 2 cm2 18 MV photon beams. Consistent values were found between the reference detector (ionization chamber in water and TLD in lung) and Monte Carlo simulations, yielding minimal

  17. Comparison of dose calculation algorithms in phantoms with lung equivalent heterogeneities under conditions of lateral electronic disequilibrium

    SciTech Connect

    Carrasco, P.; Jornet, N.; Duch, M.A.; Weber, L.; Ginjaume, M.; Eudaldo, T.; Jurado, D.; Ruiz, A.; Ribas, M.

    2004-10-01

An extensive set of benchmark measurements of PDDs and beam profiles was performed in a heterogeneous layer phantom, including a lung equivalent heterogeneity, by means of several detectors and compared against the dose values predicted by different calculation algorithms in two treatment planning systems. PDDs were measured with TLDs, plane parallel and cylindrical ionization chambers, and beam profiles with films. Additionally, Monte Carlo simulations by means of the PENELOPE code were performed. Four different field sizes (10 x 10, 5 x 5, 2 x 2, and 1 x 1 cm²) and two lung equivalent materials (CIRS, ρ_e^w = 0.195, and St. Bartholomew Hospital, London, ρ_e^w = 0.244-0.322) were studied. The performance of four correction-based algorithms and one based on convolution-superposition was analyzed. The correction-based algorithms were the Batho, the Modified Batho, and the Equivalent TAR implemented in the Cadplan (Varian) treatment planning system and the TMS Pencil Beam from the Helax-TMS (Nucletron) treatment planning system. The convolution-superposition algorithm was the Collapsed Cone implemented in the Helax-TMS. The only studied calculation methods that correlated successfully with the measured values, within a 2% average inside all media, were the Collapsed Cone and the Monte Carlo simulation. The biggest difference between the predicted and the delivered dose on the beam axis was found for the EqTAR algorithm inside the CIRS lung equivalent material in a 2 x 2 cm² 18 MV x-ray beam. In these conditions, the average and maximum differences against the TLD measurements were 32% and 39%, respectively. In the water equivalent part of the phantom every algorithm correctly predicted the dose (within 2%) everywhere except very close to the interfaces, where differences up to 24% were found for 2 x 2 cm² 18 MV photon beams. Consistent values were found between the reference detector (ionization chamber in water and TLD in lung) and Monte Carlo

  18. Optimizing clopidogrel dose response: a new clinical algorithm comprising CYP2C19 pharmacogenetics and drug interactions

    PubMed Central

    Saab, Yolande B; Zeenny, Rony; Ramadan, Wijdan H

    2015-01-01

Purpose Response to clopidogrel varies widely, with nonresponse rates ranging from 4% to 30%. A reduced-function gene variant of CYP2C19 has been associated with lower drug metabolite levels, and hence diminished platelet inhibition. Drugs that alter CYP2C19 activity may also mimic genetic variants. The aim of the study is to investigate the cumulative effect of CYP2C19 gene polymorphisms and drug interactions that affect clopidogrel dosing, and apply it in a new clinical-pharmacogenetic algorithm that can be used by clinicians in optimizing clopidogrel-based treatment. Method Clopidogrel dose optimization was analyzed based on two main parameters that affect the clopidogrel metabolite area under the curve: different CYP2C19 genotypes and concomitant drug intake. The clopidogrel adjusted dose was computed based on area under the curve ratios for different CYP2C19 genotypes when a drug interacting with CYP2C19 is added to clopidogrel treatment. A clinical-pharmacogenetic algorithm was developed based on whether clopidogrel shows 1) the expected effect as per indication, 2) little or no effect, or 3) clinical features that patients experience and that fit with clopidogrel adverse drug reactions. Results The study results show that all patients under clopidogrel treatment whose genotypes are different from *1*1 and who are concomitantly taking other drugs metabolized by CYP2C19 require clopidogrel dose adjustment. To obtain a therapeutic effect and avoid adverse drug reactions, the therapeutic dose of 75 mg clopidogrel, for example, should be lowered to 6 mg or increased to 215 mg in patients with different genotypes. Conclusion The implementation of the new clopidogrel algorithm has the potential to maximize the benefit of clopidogrel pharmacological therapy. Clinicians would be able to personalize treatment to enhance efficacy and limit toxicity. PMID:26445541
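    The dose-adjustment rule described, scaling the standard dose by the ratio of the reference active-metabolite AUC to the AUC expected for a given genotype and co-medication, can be illustrated as follows. The relative AUC values and the interaction factor are placeholders for demonstration only, not the values derived in the paper, and nothing here should be read as dosing guidance.

```python
# Hypothetical relative active-metabolite AUC for each CYP2C19 genotype
# (reference *1/*1 = 1.0); real values must come from pharmacokinetic data.
RELATIVE_AUC = {"*1/*1": 1.00, "*1/*2": 0.70, "*2/*2": 0.35, "*1/*17": 1.30}

# Hypothetical multiplicative effect of a CYP2C19-interacting co-medication on the AUC.
DRUG_INTERACTION_FACTOR = {"none": 1.00, "inhibitor": 0.80}

def adjusted_clopidogrel_dose(standard_dose_mg, genotype, comedication="none"):
    """Scale the standard dose so the expected metabolite AUC matches the *1/*1 reference."""
    relative_auc = RELATIVE_AUC[genotype] * DRUG_INTERACTION_FACTOR[comedication]
    return standard_dose_mg / relative_auc

print(round(adjusted_clopidogrel_dose(75, "*2/*2", "inhibitor"), 1))  # higher dose suggested
print(round(adjusted_clopidogrel_dose(75, "*1/*17"), 1))              # lower dose suggested
```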

  19. Effects of computational phantoms on the effective dose and two-dosimeter algorithm for external photon beams.

    PubMed

    Karimi-Shahri, K; Rafat-Motavalli, L; Miri-Hakimabad, H; Liu, L; Li, J

    2016-09-01

    In this study, the effect of computational phantoms on the effective dose (E), dosimeter responses positioned on the front (chest) and back of phantom, and two-dosimeter algorithm was investigated for external photon beams. This study was performed using Korean Typical MAN-2 (KTMAN-2), Chinese Reference Adult Male (CRAM), ICRP male reference, and Male Adult meSH (MASH) reference phantoms. Calculations were performed for beam directions in different polar and azimuthal angles using the Monte Carlo code of MCNP at energies of 0.08, 0.3, and 1MeV. Results show that the body shape significantly affects E and two-dosimeter responses when the dosimeters are indirectly irradiated. The acquired two-dosimeter algorithms are almost the same for all the mentioned phantoms except for KTMAN-2. Comparisons between the obtained E and estimated E (Eest), acquired from two-dosimeter algorithm, illustrate that the Eest is overestimated in overhead (OH) and underfoot (UF) directions. The effect of using one algorithm for all phantoms was also investigated. Results show that application of one algorithm to all reference phantoms is possible. PMID:27389880
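    A two-dosimeter algorithm of the general form used in such studies estimates the effective dose as a weighted sum of the front (chest) and back dosimeter readings. The weights below are placeholders; the actual coefficients are fitted per phantom, energy, and beam direction in the study.

```python
def estimate_effective_dose(h_front, h_back, w_front=0.7, w_back=0.3):
    """Two-dosimeter estimate of the effective dose E as a weighted sum of the
    front (chest) and back personal-dosimeter readings (all in mSv).
    The weights are illustrative, not the coefficients derived in the paper."""
    return w_front * h_front + w_back * h_back

# Example: an anterior-posterior irradiation mostly exposes the front dosimeter.
print(estimate_effective_dose(h_front=1.2, h_back=0.4))
```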

  20. A Matter of Timing: Identifying Significant Multi-Dose Radiotherapy Improvements by Numerical Simulation and Genetic Algorithm Search

    PubMed Central

    Angus, Simon D.; Piotrowska, Monika Joanna

    2014-01-01

Multi-dose radiotherapy protocols (fraction dose and timing) currently used in the clinic are the product of human selection based on habit, received wisdom, physician experience and intra-day patient timetabling. However, due to combinatorial considerations, the potential treatment protocol space for a given total dose or treatment length is enormous, even for relatively coarse search; well beyond the capacity of traditional in-vitro methods. In contrast, high fidelity numerical simulation of tumor development is well suited to the challenge. Building on our previous single-dose numerical simulation model of EMT6/Ro spheroids, a multi-dose irradiation response module is added and calibrated to the effective dose arising from 18 independent multi-dose treatment programs available in the experimental literature. With the developed model a constrained, non-linear search for better performing candidate protocols is conducted within the vicinity of two benchmarks by genetic algorithm (GA) techniques. After evaluating less than 0.01% of the potential benchmark protocol space, candidate protocols were identified by the GA which conferred an average of 9.4% (max benefit 16.5%) and 7.1% (13.3%) improvement (reduction) in tumour cell count compared to the two benchmarks, respectively. Noticing that a convergent phenomenon of the top performing protocols was their temporal synchronicity, a further series of numerical experiments was conducted with periodic time-gap protocols (10 h to 23 h), leading to the discovery that the performance of the GA search candidates could be replicated by 17–18 h periodic candidates. Further dynamic irradiation-response cell-phase analysis revealed that such periodicity cohered with latent EMT6/Ro cell-phase temporal patterning. Taken together, this study provides powerful evidence towards the hypothesis that even simple inter-fraction timing variations for a given fractional dose program may present a facile, and highly cost
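    A genetic-algorithm search over inter-fraction timings, with a surrogate fitness standing in for the numerical tumour simulation, might look like the sketch below. The fitness function (rewarding roughly 17.5 h gaps), population size, selection, and mutation scheme are all illustrative assumptions, not the calibrated EMT6/Ro model.

```python
import random

N_FRACTIONS, HORIZON_H = 10, 240          # 10 fractions within a 240 h window (illustrative)

def random_protocol():
    """A protocol is a sorted list of irradiation times (hours) within the horizon."""
    return sorted(random.uniform(0, HORIZON_H) for _ in range(N_FRACTIONS))

def fitness(protocol):
    """Surrogate objective standing in for the tumour-growth simulation:
    here we simply reward inter-fraction gaps close to 17.5 h (a stand-in, not the model)."""
    gaps = [b - a for a, b in zip(protocol, protocol[1:])]
    return -sum((g - 17.5) ** 2 for g in gaps)

def mutate(protocol, sigma=4.0):
    """Jitter each fraction time and clip it to the treatment window."""
    return sorted(min(HORIZON_H, max(0.0, t + random.gauss(0, sigma))) for t in protocol)

population = [random_protocol() for _ in range(40)]
for _ in range(200):                       # generations
    population.sort(key=fitness, reverse=True)
    parents = population[:10]              # truncation selection
    population = parents + [mutate(random.choice(parents)) for _ in range(30)]

best = max(population, key=fitness)
print([round(b - a, 1) for a, b in zip(best, best[1:])])   # near-periodic gaps emerge
```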

  1. Prospective Evaluation of Prior Image Constrained Compressed Sensing (PICCS) Algorithm in Abdominal CT: A comparison of reduced dose with standard dose imaging

    PubMed Central

    Lubner, Meghan G.; Pickhardt, Perry J.; Kim, David H.; Tang, Jie; Munoz del Rio, Alejandro; Chen, Guang-Hong

    2014-01-01

Purpose To prospectively study CT dose reduction using the "prior image constrained compressed sensing" (PICCS) reconstruction technique. Methods Immediately following routine standard dose (SD) abdominal MDCT, 50 patients (mean age, 57.7 years; mean BMI, 28.8) underwent a second reduced-dose (RD) scan (targeted dose reduction, 70-90%). DLP, CTDIvol and SSDE were compared. Several reconstruction algorithms (FBP, ASIR, and PICCS) were applied to the RD series. SD images with FBP served as the reference standard. Two blinded readers evaluated each series for subjective image quality and focal lesion detection. Results Mean DLP, CTDIvol, and SSDE for the RD series were 140.3 mGy*cm (median 79.4), 3.7 mGy (median 1.8), and 4.2 mGy (median 2.3), compared with 493.7 mGy*cm (median 345.8), 12.9 mGy (median 7.9 mGy) and 14.6 mGy (median 10.1) for the SD series, respectively. Mean effective patient diameter was 30.1 cm (median 30), which translates to a mean SSDE reduction of 72% (p<0.001). RD-PICCS image quality score was 2.8±0.5, improved over RD-FBP (1.7±0.7) and RD-ASIR (1.9±0.8) (p<0.001), but lower than SD (3.5±0.5) (p<0.001). Readers detected 81% (184/228) of focal lesions on the RD-PICCS series, versus 67% (153/228) and 65% (149/228) for RD-FBP and RD-ASIR, respectively. Mean image noise was significantly reduced on the RD-PICCS series (13.9 HU) compared with RD-FBP (57.2) and RD-ASIR (44.1) (p<0.001). Conclusion PICCS allows for marked dose reduction at abdominal CT with improved image quality and diagnostic performance over reduced-dose FBP and ASIR. Further study is needed to determine indication-specific dose reduction levels that preserve acceptable diagnostic accuracy relative to higher-dose protocols. PMID:24943136

  2. Comparison of build-up region doses in oblique tangential 6 MV photon beams calculated by AAA and CCC algorithms in breast Rando phantom

    NASA Astrophysics Data System (ADS)

    Masunun, P.; Tangboonduangjit, P.; Dumrongkijudom, N.

    2016-03-01

The purpose of this study is to use two algorithms to compare the build-up region doses on the bolus-covered surface of a breast Rando phantom, the doses within the breast phantom, and the doses in the lung, which is the heterogeneous region. The AAA in the Eclipse TPS and the collapsed cone convolution (CCC) algorithm in the Pinnacle treatment planning system were used to plan a tangential field technique with a 6 MV photon beam and a total dose of 200 cGy in the breast Rando phantom covered with bolus (5 mm and 10 mm). TLDs were calibrated with Cobalt-60 and used to measure the doses during irradiation. The treatment planning results show that the doses in the build-up region and in the breast phantom were closely matched between the two algorithms, with differences of less than 2%. However, the AAA overestimated the doses in the lung (L2) by 13.78% and 6.06% at 5 mm and 10 mm bolus thickness, respectively, when compared with the CCC algorithm. The TLD measurements were lower than both planned doses in the build-up region and in the breast phantom, but higher in the lung (L2), at both bolus thicknesses.

  3. Algorithms used in heterogeneous dose calculations show systematic differences as measured with the Radiological Physics Center’s anthropomorphic thorax phantom used for RTOG credentialing

    PubMed Central

    Kry, Stephen F.; Alvarez, Paola; Molineu, Andrea; Amador, Carrie; Galvin, James; Followill, David S.

    2012-01-01

    Purpose To determine the impact of treatment planning algorithm on the accuracy of heterogeneous dose calculations in the Radiological Physics Center (RPC) thorax phantom. Methods and Materials We retrospectively analyzed the results of 304 irradiations of the RPC thorax phantom at 221 different institutions as part of credentialing for RTOG clinical trials; the irradiations were all done using 6-MV beams. Treatment plans included those for intensity-modulated radiation therapy (IMRT) as well as 3D conformal therapy (3D CRT). Heterogeneous plans were developed using Monte Carlo (MC), convolution/superposition (CS) and the anisotropic analytic algorithm (AAA), as well as pencil beam (PB) algorithms. For each plan and delivery, the absolute dose measured in the center of a lung target was compared to the calculated dose, as was the planar dose in 3 orthogonal planes. The difference between measured and calculated dose was examined as a function of planning algorithm as well as use of IMRT. Results PB algorithms overestimated the dose delivered to the center of the target by 4.9% on average. Surprisingly, CS algorithms and AAA also showed a systematic overestimation of the dose to the center of the target, by 3.7% on average. In contrast, the MC algorithm dose calculations agreed with measurement within 0.6% on average. There was no difference observed between IMRT and 3D CRT calculation accuracy. Conclusion Unexpectedly, advanced treatment planning systems (those using CS and AAA algorithms) overestimated the dose that was delivered to the lung target. This issue requires attention in terms of heterogeneity calculations and potentially in terms of clinical practice. PMID:23237006

  4. An algorithm for kilovoltage x-ray dose calculations with applications in kV-CBCT scans and 2D planar projected radiographs

    NASA Astrophysics Data System (ADS)

    Pawlowski, Jason M.; Ding, George X.

    2014-04-01

    A new model-based dose calculation algorithm is presented for kilovoltage x-rays and is tested for the cases of calculating the radiation dose from kilovoltage cone-beam CT (kV-CBCT) and 2D planar projected radiographs. This algorithm calculates the radiation dose to water-like media as the sum of primary and scattered dose components. The scatter dose is calculated by convolution of a newly introduced, empirically parameterized scatter dose kernel with the primary photon fluence. Several approximations are introduced to increase the scatter dose calculation efficiency: (1) the photon energy spectrum is approximated as monoenergetic; (2) density inhomogeneities are accounted for by implementing a global distance scaling factor in the scatter kernel; (3) kernel tilting is ignored. These approximations allow for efficient calculation of the scatter dose convolution with the fast Fourier transform. Monte Carlo simulations were used to obtain the model parameters. The accuracy of using this model-based algorithm was validated by comparing with the Monte Carlo method for calculating dose distributions for real patients resulting from radiotherapy image guidance procedures including volumetric kV-CBCT scans and 2D planar projected radiographs. For all patients studied, mean dose-to-water errors for kV-CBCT are within 0.3% with a maximum standard deviation error of 4.1%. Using a medium-dependent correction method to account for the effects of photoabsorption in bone on the dose distribution, mean dose-to-medium errors for kV-CBCT are within 3.6% for bone and 2.4% for soft tissues. This algorithm offers acceptable accuracy and has the potential to extend the applicability of model-based dose calculation algorithms from megavoltage to kilovoltage photon beams.
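    The scatter-dose step described above, convolving an empirically parameterized kernel with the primary photon fluence and exploiting the FFT for speed, can be illustrated with scipy's FFT-based convolution. The Gaussian kernel, the fluence map, and the primary/scatter weights below are placeholders, not the paper's fitted model.

```python
import numpy as np
from scipy.signal import fftconvolve

# Hypothetical 2-D primary photon fluence map (arbitrary units) for one kV projection.
ny, nx = 128, 128
fluence = np.zeros((ny, nx))
fluence[32:96, 32:96] = 1.0                     # open field

# Placeholder scatter-dose kernel (isotropic Gaussian); the real kernel is
# empirically parameterized and fitted to Monte Carlo data in the paper.
y, x = np.mgrid[-32:33, -32:33]
sigma = 8.0
kernel = np.exp(-(x**2 + y**2) / (2 * sigma**2))
kernel /= kernel.sum()

# Scatter dose = kernel convolved with the primary fluence, computed via the FFT.
scatter = 0.2 * fftconvolve(fluence, kernel, mode="same")

# Total kV dose model: primary component plus the scatter estimate (illustrative weights).
primary = 0.8 * fluence
total_dose = primary + scatter
print(total_dose.max())
```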

  5. The accuracy of the out-of-field dose calculations using a model based algorithm in a commercial treatment planning system.

    PubMed

    Wang, Lilie; Ding, George X

    2014-07-01

The out-of-field dose can be clinically important as it relates to the dose to organs at risk, although the accuracy of its calculation in commercial radiotherapy treatment planning systems (TPSs) receives less attention. This study evaluates the uncertainties of the out-of-field dose calculated with a model based dose calculation algorithm, the anisotropic analytical algorithm (AAA), implemented in a commercial radiotherapy TPS, Varian Eclipse V10, by using Monte Carlo (MC) simulations in which the entire accelerator head, including the multi-leaf collimators, is modeled. The MC calculated out-of-field doses were validated by experimental measurements. The dose calculations were performed in a water phantom as well as in CT based patient geometries, and both static and highly modulated intensity-modulated radiation therapy (IMRT) fields were evaluated. We compared the calculated out-of-field doses, defined as lower than 5% of the prescription dose, in four H&N cancer patients and two lung cancer patients treated with volumetric modulated arc therapy (VMAT) and IMRT techniques. The results show that the discrepancy of calculated out-of-field dose profiles between AAA and MC depends on the depth and is generally less than 1% for in-water phantom comparisons and for CT based patient dose calculations for static fields and IMRT. In the case of VMAT plans, the difference between AAA and MC is <0.5%. The clinical impact resulting from the error in the calculated organ doses was analyzed by using dose-volume histograms. Although the AAA algorithm significantly underestimated the out-of-field doses, the clinical impact on the calculated organ doses in out-of-field regions may not be significant in practice due to very low out-of-field doses relative to the target dose. PMID:24925858

  6. The accuracy of the out-of-field dose calculations using a model based algorithm in a commercial treatment planning system

    NASA Astrophysics Data System (ADS)

    Wang, Lilie; Ding, George X.

    2014-07-01

The out-of-field dose can be clinically important as it relates to the dose to organs at risk, although the accuracy of its calculation in commercial radiotherapy treatment planning systems (TPSs) receives less attention. This study evaluates the uncertainties of the out-of-field dose calculated with a model based dose calculation algorithm, the anisotropic analytical algorithm (AAA), implemented in a commercial radiotherapy TPS, Varian Eclipse V10, by using Monte Carlo (MC) simulations in which the entire accelerator head, including the multi-leaf collimators, is modeled. The MC calculated out-of-field doses were validated by experimental measurements. The dose calculations were performed in a water phantom as well as in CT based patient geometries, and both static and highly modulated intensity-modulated radiation therapy (IMRT) fields were evaluated. We compared the calculated out-of-field doses, defined as lower than 5% of the prescription dose, in four H&N cancer patients and two lung cancer patients treated with volumetric modulated arc therapy (VMAT) and IMRT techniques. The results show that the discrepancy of calculated out-of-field dose profiles between AAA and MC depends on the depth and is generally less than 1% for in-water phantom comparisons and for CT based patient dose calculations for static fields and IMRT. In the case of VMAT plans, the difference between AAA and MC is <0.5%. The clinical impact resulting from the error in the calculated organ doses was analyzed by using dose-volume histograms. Although the AAA algorithm significantly underestimated the out-of-field doses, the clinical impact on the calculated organ doses in out-of-field regions may not be significant in practice due to very low out-of-field doses relative to the target dose.

  7. Ultra-low dose CT attenuation correction for PET/CT: analysis of sparse view data acquisition and reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Rui, Xue; Cheng, Lishui; Long, Yong; Fu, Lin; Alessio, Adam M.; Asma, Evren; Kinahan, Paul E.; De Man, Bruno

    2015-09-01

For PET/CT systems, PET image reconstruction requires corresponding CT images for anatomical localization and attenuation correction. In the case of PET respiratory gating, multiple gated CT scans can offer phase-matched attenuation and motion correction, at the expense of increased radiation dose. We aim to minimize the dose of the CT scan, while preserving adequate image quality for the purpose of PET attenuation correction, by introducing sparse view CT data acquisition. We investigated sparse view CT acquisition protocols resulting in ultra-low dose CT scans designed for PET attenuation correction. We analyzed the tradeoffs between the number of views and the integrated tube current per view for a given dose using CT and PET simulations of a 3D NCAT phantom with lesions inserted into the liver and lung. We simulated seven CT acquisition protocols with {984, 328, 123, 41, 24, 12, 8} views per rotation at a gantry speed of 0.35 s. One standard dose and four ultra-low dose levels, namely, 0.35 mAs, 0.175 mAs, 0.0875 mAs, and 0.043 75 mAs, were investigated. Both the analytical Feldkamp, Davis and Kress (FDK) algorithm and the Model Based Iterative Reconstruction (MBIR) algorithm were used for CT image reconstruction. We also evaluated the impact of sinogram interpolation to estimate the missing projection measurements due to sparse view data acquisition. For MBIR, we used a penalized weighted least squares (PWLS) cost function with an approximate total-variation (TV) regularizing penalty function. We compared a tube pulsing mode and a continuous exposure mode for sparse view data acquisition. Global PET ensemble root-mean-square error (RMSE) and local ensemble lesion activity error were used as quantitative evaluation metrics for PET image quality. With sparse view sampling, it is possible to greatly reduce the CT scan dose when it is primarily used for PET attenuation correction with little or no measurable effect on the PET image. For the four ultra-low dose
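    The sinogram-interpolation step mentioned above, estimating the projection views missing from a sparse acquisition before analytic reconstruction, can be done with simple linear interpolation along the view axis, as sketched below on a synthetic sinogram. The sinogram, view counts, and interpolation scheme are illustrative assumptions.

```python
import numpy as np

n_full_views, n_channels = 984, 64
angles_full = np.linspace(0, 2 * np.pi, n_full_views, endpoint=False)

# Synthetic fully sampled sinogram of a single off-centre point object (illustrative).
det = np.arange(n_channels)
trace = 32 + 12 * np.sin(angles_full)                     # channel hit by the object per view
sino_full = np.exp(-0.5 * ((det[None, :] - trace[:, None]) / 1.5) ** 2)

# Sparse acquisition: keep only 41 of the 984 views.
keep = np.linspace(0, n_full_views - 1, 41).astype(int)
sino_sparse = sino_full[keep]

# Estimate the missing views by linear interpolation along the view (angle) axis,
# independently for each detector channel.
sino_interp = np.empty_like(sino_full)
for ch in range(n_channels):
    sino_interp[:, ch] = np.interp(np.arange(n_full_views), keep, sino_sparse[:, ch])

print(np.abs(sino_interp - sino_full).mean())             # interpolation error (a.u.)
```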

  8. Ultra-low dose CT attenuation correction for PET/CT: analysis of sparse view data acquisition and reconstruction algorithms.

    PubMed

    Rui, Xue; Cheng, Lishui; Long, Yong; Fu, Lin; Alessio, Adam M; Asma, Evren; Kinahan, Paul E; De Man, Bruno

    2015-10-01

For PET/CT systems, PET image reconstruction requires corresponding CT images for anatomical localization and attenuation correction. In the case of PET respiratory gating, multiple gated CT scans can offer phase-matched attenuation and motion correction, at the expense of increased radiation dose. We aim to minimize the dose of the CT scan, while preserving adequate image quality for the purpose of PET attenuation correction, by introducing sparse view CT data acquisition. We investigated sparse view CT acquisition protocols resulting in ultra-low dose CT scans designed for PET attenuation correction. We analyzed the tradeoffs between the number of views and the integrated tube current per view for a given dose using CT and PET simulations of a 3D NCAT phantom with lesions inserted into the liver and lung. We simulated seven CT acquisition protocols with {984, 328, 123, 41, 24, 12, 8} views per rotation at a gantry speed of 0.35 s. One standard dose and four ultra-low dose levels, namely, 0.35 mAs, 0.175 mAs, 0.0875 mAs, and 0.043 75 mAs, were investigated. Both the analytical Feldkamp, Davis and Kress (FDK) algorithm and the Model Based Iterative Reconstruction (MBIR) algorithm were used for CT image reconstruction. We also evaluated the impact of sinogram interpolation to estimate the missing projection measurements due to sparse view data acquisition. For MBIR, we used a penalized weighted least squares (PWLS) cost function with an approximate total-variation (TV) regularizing penalty function. We compared a tube pulsing mode and a continuous exposure mode for sparse view data acquisition. Global PET ensemble root-mean-square error (RMSE) and local ensemble lesion activity error were used as quantitative evaluation metrics for PET image quality. With sparse view sampling, it is possible to greatly reduce the CT scan dose when it is primarily used for PET attenuation correction with little or no measurable effect on the PET image. For the four ultra-low dose levels

  9. SU-E-T-481: Dosimetric Comparison of Acuros XB and Anisotropic Analytic Algorithm with Commercial Monte Carlo Based Dose Calculation Algorithm for Stereotactic Body Radiation Therapy of Lung Cancer

    SciTech Connect

    Cao, M; Tenn, S; Lee, C; Yang, Y; Lamb, J; Agazaryan, N; Lee, P; Low, D

    2014-06-01

Purpose: To evaluate the performance of three commercially available treatment planning systems for stereotactic body radiation therapy (SBRT) of lung cancer using the following algorithms: the Boltzmann transport equation based algorithm (Acuros XB, AXB), the convolution based Anisotropic Analytic Algorithm (AAA), and the Monte Carlo based algorithm (XVMC). Methods: A total of 10 patients with early stage non-small cell peripheral lung cancer were included. The initial clinical plans were generated using the XVMC based treatment planning system with a prescription of 54 Gy in 3 fractions following the RTOG 0613 protocol. The plans were recalculated with the same beam parameters and monitor units using the AAA and AXB algorithms. A calculation grid size of 2 mm was used for all algorithms. The dose distribution, conformity, and dosimetric parameters for the targets and organs at risk (OAR) were compared between the algorithms. Results: The average PTV volume was 19.6 mL (range 4.2–47.2 mL). The volumes of PTV covered by the prescribed dose (PTV-V100) were 93.97±2.00%, 95.07±2.07% and 95.10±2.97% for the XVMC, AXB and AAA algorithms, respectively. There was no significant difference in the high dose conformity index; however, XVMC predicted slightly higher values (p=0.04) for the ratio of the 50% prescription isodose volume to the PTV (R50%). The percentage volumes of the total lungs receiving dose >20 Gy (LungV20Gy) were 4.03±2.26%, 3.86±2.22% and 3.85±2.21% for the XVMC, AXB and AAA algorithms. Examination of dose volume histograms (DVH) revealed small differences in targets and OARs for most patients. However, the AAA algorithm was found to predict considerably higher PTV coverage compared with the AXB and XVMC algorithms in two cases. The dose difference was found to be located primarily in the periphery of the target. Conclusion: For clinical SBRT lung treatment planning, the dosimetric differences between the three commercially available algorithms are generally small except at the target periphery. XVMC
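    The dose-volume metrics compared in this abstract (PTV-V100, R50%, lung V20Gy) reduce to simple voxel counting once a dose grid and structure masks are available. The toy dose distribution and masks below are hypothetical and serve only to show the arithmetic.

```python
import numpy as np

prescription = 54.0                                    # Gy (e.g. 3 x 18 Gy)

# Hypothetical dose grid: a smooth dose distribution peaked on the target (toy data).
z, y, x = np.mgrid[0:40, 0:40, 0:40]
r2 = (z - 20) ** 2 + (y - 20) ** 2 + (x - 20) ** 2
dose = 60.0 * np.exp(-r2 / (2 * 8.0 ** 2))

# Hypothetical structure masks (boolean voxel masks with the same shape as the dose grid).
ptv = r2 <= 5 ** 2
lungs = (r2 > 8 ** 2) & (r2 <= 18 ** 2)

# PTV-V100: percentage of the PTV receiving at least the prescription dose.
ptv_v100 = 100.0 * np.mean(dose[ptv] >= prescription)

# R50%: volume receiving >= 50% of the prescription divided by the PTV volume.
r50 = np.sum(dose >= 0.5 * prescription) / np.sum(ptv)

# Lung V20Gy: percentage of the lung volume receiving more than 20 Gy.
lung_v20 = 100.0 * np.mean(dose[lungs] > 20.0)

print(f"PTV-V100 = {ptv_v100:.1f}%  R50% = {r50:.2f}  Lung V20Gy = {lung_v20:.1f}%")
```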

  10. Performance evaluation of iterative reconstruction algorithms for achieving CT radiation dose reduction - a phantom study.

    PubMed

    Dodge, Cristina T; Tamm, Eric P; Cody, Dianna D; Liu, Xinming; Jensen, Corey T; Wei, Wei; Kundra, Vikas; Rong, John

    2016-01-01

The purpose of this study was to characterize image quality and dose performance with GE CT iterative reconstruction techniques, adaptive statistical iterative reconstruction (ASiR) and model-based iterative reconstruction (MBIR), over a range of typical to low-dose intervals using the Catphan 600 and the anthropomorphic Kyoto Kagaku abdomen phantoms. The scope of the project was to quantitatively describe the advantages and limitations of these approaches. The Catphan 600 phantom, supplemented with a fat-equivalent oval ring, was scanned using a GE Discovery HD750 scanner at 120 kVp, 0.8 s rotation time, and pitch factors of 0.516, 0.984, and 1.375. The mA was selected for each pitch factor to achieve CTDIvol values of 24, 18, 12, 6, 3, 2, and 1 mGy. Images were reconstructed at 2.5 mm thickness with filtered back-projection (FBP); 20%, 40%, and 70% ASiR; and MBIR. The potential for dose reduction and low-contrast detectability were evaluated from noise and contrast-to-noise ratio (CNR) measurements in the CTP 404 module of the Catphan. Hounsfield units (HUs) of several materials were evaluated from the cylinder inserts in the CTP 404 module, and the modulation transfer function (MTF) was calculated from the air insert. The results were confirmed in the anthropomorphic Kyoto Kagaku abdomen phantom at 6, 3, 2, and 1 mGy. MBIR reduced noise levels five-fold and increased CNR by a factor of five compared to FBP below 6 mGy CTDIvol, resulting in a substantial improvement in image quality. Compared to ASiR and FBP, HU in images reconstructed with MBIR were consistently lower, and this discrepancy was reversed by higher pitch factors in some materials. MBIR improved the conspicuity of the high-contrast spatial resolution bar pattern, and MTF quantification confirmed the superior spatial resolution performance of MBIR versus FBP and ASiR at higher dose levels. While ASiR and FBP were relatively insensitive to changes in dose and pitch, the spatial resolution for MBIR
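    Noise and contrast-to-noise ratio of the kind reported here are usually measured from region-of-interest statistics on the reconstructed images. The sketch below computes them on synthetic ROI samples; the HU values, noise levels, and CNR definition (contrast over background noise) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical HU samples from two regions of interest in a phantom image:
# a low-contrast insert and the surrounding background.
insert_roi = rng.normal(loc=35.0, scale=12.0, size=500)       # mean 35 HU, noise 12 HU
background_roi = rng.normal(loc=10.0, scale=12.0, size=500)   # mean 10 HU, noise 12 HU

noise = background_roi.std(ddof=1)                            # image noise (HU)

# One common CNR definition: contrast divided by background noise.
cnr = abs(insert_roi.mean() - background_roi.mean()) / noise

print(f"noise = {noise:.1f} HU, CNR = {cnr:.2f}")
```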

  11. Design and development of a new micro-beam treatment planning system: effectiveness of algorithms of optimization and dose calculations and potential of micro-beam treatment.

    PubMed

    Tachibana, Hidenobu; Kojima, Hiroyuki; Yusa, Noritaka; Miyajima, Satoshi; Tsuda, Akihisa; Yamashita, Takashi

    2012-07-01

A new treatment planning system (TPS) was designed and developed for a new treatment system consisting of a micro-beam-enabled linac with robotics and a real-time tracking system. We also evaluated the effectiveness of the optimization and dose calculation algorithms implemented in the TPS for the new treatment system. In the TPS, the optimization procedure consists of the pseudo Beam's-Eye-View method for finding the optimized beam directions and the steepest-descent method for determination of beam intensities. We used a superposition-/convolution-based (SC-based) algorithm and a Monte Carlo-based (MC-based) algorithm to calculate dose distributions using CT image data sets. In the SC-based algorithm, dose density scaling was applied for the calculation of inhomogeneity corrections. The MC-based algorithm was implemented with the Geant4 toolkit and a phase-based approach using network-parallel computing. From the evaluation of the TPS, the system can optimize the direction and intensity of individual beams. The average error of the dose calculated by the SC-based algorithm was less than 1%, with a calculation time of 15 s for one beam. However, the MC-based algorithm needed 72 min for one beam using the phase-based approach, even though parallel computing reduced the time for multiple-beam calculations and provided an 18.4 times faster calculation speed. The SC-based algorithm could be practically acceptable for dose calculation in terms of accuracy and computation time. Additionally, we found a dosimetric advantage of the proton Bragg peak-like dose distribution in micro-beam treatment. PMID:22544809

  12. TU-A-12A-07: CT-Based Biomarkers to Characterize Lung Lesion: Effects of CT Dose, Slice Thickness and Reconstruction Algorithm Based Upon a Phantom Study

    SciTech Connect

    Zhao, B; Tan, Y; Tsai, W; Lu, L; Schwartz, L; So, J; Goldman, J; Lu, Z

    2014-06-15

Purpose: Radiogenomics promises the ability to study a cancer tumor's genotype from the phenotype obtained through radiographic imaging. However, little attention has been paid to the sensitivity of image features, the image-based biomarkers, to image acquisition techniques. This study explores the impact of CT dose, slice thickness and reconstruction algorithm on measured image features using a thorax phantom. Methods: Twenty-four phantom lesions of known volume (1 and 2 mm), shape (spherical, elliptical, lobular and spicular) and density (-630, -10 and +100 HU) were scanned on a GE VCT at four doses (25, 50, 100, and 200 mAs). For each scan, six image series were reconstructed at three slice thicknesses of 5, 2.5 and 1.25 mm with continuous intervals, using the lung and standard reconstruction algorithms. The lesions were segmented with an in-house 3D algorithm. Fifty (50) image features representing lesion size, shape, edge, and density distribution/texture were computed. A regression method was employed to analyze the effect of CT dose, slice thickness and reconstruction algorithm on these features, adjusting for 3 confounding factors (size, density and shape of the phantom lesions). Results: The coefficients of CT dose, slice thickness and reconstruction algorithm are presented in Table 1 in the supplementary material. No significant difference was found between the image features calculated on low dose CT scans (25 mAs and 50 mAs). About 50% of the texture features were found statistically different between low doses and high doses (100 and 200 mAs). Significant differences were found for almost all features when calculated on 1.25 mm, 2.5 mm, and 5 mm slice thickness images. Reconstruction algorithms significantly affected all density-based image features, but not morphological features. Conclusions: There is a great need to standardize CT imaging protocols for radiogenomics studies because CT dose, slice thickness and reconstruction algorithm impact quantitative image features to

  13. A dose calculation algorithm with correction for proton-nucleus interactions in non-water materials for proton radiotherapy treatment planning

    NASA Astrophysics Data System (ADS)

    Inaniwa, T.; Kanematsu, N.; Sato, S.; Kohno, R.

    2016-01-01

In treatment planning for proton radiotherapy, the dose measured in water is applied to the patient dose calculation with density scaling by the stopping power ratio ρS. Since the body tissues are chemically different from water, this approximation may cause dose calculation errors, especially due to differences in nuclear interactions. We proposed and validated an algorithm for correcting these errors. The dose in water is decomposed into three constituents according to the physical interactions of protons in water: the dose from primary protons continuously slowing down by electromagnetic interactions, the dose from protons scattered by elastic and/or inelastic interactions, and the dose resulting from nonelastic interactions. The proportions of the three dose constituents differ between body tissues and water. We determined correction factors for the proportions of the dose constituents with Monte Carlo simulations in various standard body tissues, and formulated them as functions of ρS for patient dose calculation. The influence of nuclear interactions on dose was assessed by comparing the Monte Carlo simulated dose and the uncorrected dose in common phantom materials. The influence around the Bragg peak amounted to -6% for polytetrafluoroethylene and 0.3% for polyethylene. The validity of the correction method was confirmed by comparing the simulated and corrected doses in the materials. The deviation was below 0.8% for all materials. The accuracy of the correction factors derived with Monte Carlo simulations was separately verified through irradiation experiments with a 235 MeV proton beam using common phantom materials. The corrected doses agreed with the measurements within 0.4% for all materials except graphite. The influence on tumor dose was assessed in a prostate case. The dose reduction in the tumor was below 0.5%. Our results verify that this algorithm is practical and accurate for proton radiotherapy treatment planning, and
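    The correction described, reweighting the three water-dose constituents by tissue-dependent factors expressed as functions of the stopping power ratio ρS, can be sketched as follows. The functional forms and coefficients below are placeholders, not the factors fitted to the Monte Carlo simulations in the paper.

```python
def correction_factors(rho_s):
    """Placeholder correction factors for the three dose constituents as smooth
    functions of the stopping power ratio rho_S (the real functions are fitted
    to Monte Carlo simulations of standard body tissues in the paper)."""
    f_primary = 1.0 + 0.01 * (rho_s - 1.0)       # primary protons slowing down
    f_scatter = 1.0 - 0.03 * (rho_s - 1.0)       # elastically/inelastically scattered protons
    f_nonelastic = 1.0 - 0.10 * (rho_s - 1.0)    # products of nonelastic nuclear interactions
    return f_primary, f_scatter, f_nonelastic

def dose_in_tissue(d_primary_w, d_scatter_w, d_nonelastic_w, rho_s):
    """Correct the three water-dose constituents and sum them to obtain the tissue dose."""
    f1, f2, f3 = correction_factors(rho_s)
    return f1 * d_primary_w + f2 * d_scatter_w + f3 * d_nonelastic_w

# Toy voxel: dose constituents in water (Gy) and a bone-like stopping power ratio.
print(dose_in_tissue(1.60, 0.30, 0.10, rho_s=1.6))
```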

  14. A dose calculation algorithm with correction for proton-nucleus interactions in non-water materials for proton radiotherapy treatment planning.

    PubMed

    Inaniwa, T; Kanematsu, N; Sato, S; Kohno, R

    2016-01-01

In treatment planning for proton radiotherapy, the dose measured in water is applied to the patient dose calculation with density scaling by the stopping power ratio ρS. Since the body tissues are chemically different from water, this approximation may cause dose calculation errors, especially due to differences in nuclear interactions. We proposed and validated an algorithm for correcting these errors. The dose in water is decomposed into three constituents according to the physical interactions of protons in water: the dose from primary protons continuously slowing down by electromagnetic interactions, the dose from protons scattered by elastic and/or inelastic interactions, and the dose resulting from nonelastic interactions. The proportions of the three dose constituents differ between body tissues and water. We determined correction factors for the proportions of the dose constituents with Monte Carlo simulations in various standard body tissues, and formulated them as functions of ρS for patient dose calculation. The influence of nuclear interactions on dose was assessed by comparing the Monte Carlo simulated dose and the uncorrected dose in common phantom materials. The influence around the Bragg peak amounted to -6% for polytetrafluoroethylene and 0.3% for polyethylene. The validity of the correction method was confirmed by comparing the simulated and corrected doses in the materials. The deviation was below 0.8% for all materials. The accuracy of the correction factors derived with Monte Carlo simulations was separately verified through irradiation experiments with a 235 MeV proton beam using common phantom materials. The corrected doses agreed with the measurements within 0.4% for all materials except graphite. The influence on tumor dose was assessed in a prostate case. The dose reduction in the tumor was below 0.5%. Our results verify that this algorithm is practical and accurate for proton radiotherapy treatment

  15. SU-E-T-356: Accuracy of Eclipse Electron Macro Monte Carlo Dose Algorithm for Use in Bolus Electron Conformal Therapy

    SciTech Connect

    Carver, R; Popple, R; Benhabib, S; Antolak, J; Sprunger, C; Hogstrom, K

    2014-06-01

Purpose: To evaluate the accuracy of electron dose distributions calculated by the Varian Eclipse electron Monte Carlo (eMC) algorithm for use with recent commercially available bolus electron conformal therapy (ECT). Methods: eMC-calculated electron dose distributions for bolus ECT have been compared to those previously measured for cylindrical phantoms (retromolar trigone and nose), whose axial cross sections were based on the mid-PTV CT anatomy for each site. The phantoms consisted of SR4 muscle substitute, SR4 bone substitute, and air. The bolus ECT treatment plans were imported into the Eclipse treatment planning system and calculated using the maximum allowable histories (2×10^9), resulting in a statistical error of <0.2%. Smoothing was not used for these calculations. Differences between eMC-calculated and measured dose distributions were evaluated in terms of absolute dose difference as well as distance to agreement (DTA). Results: Results from the eMC for the retromolar trigone phantom showed 89% (41/46) of dose points within 3% dose difference or 3 mm DTA. There was an average dose difference of −0.12% with a standard deviation of 2.56%. Results for the nose phantom showed 95% (54/57) of dose points within 3% dose difference or 3 mm DTA. There was an average dose difference of 1.12% with a standard deviation of 3.03%. Dose calculation times for the retromolar trigone and nose treatment plans were 15 min and 22 min, respectively, using 16 processors (Intel Xeon E5-2690, 2.9 GHz) on a Varian Eclipse framework agent server (FAS). Results of this study were consistent with those previously reported for accuracy of the eMC electron dose algorithm and for the .decimal, Inc. pencil beam redefinition algorithm used to plan the bolus. Conclusion: These results show that the accuracy of the Eclipse eMC algorithm is suitable for clinical implementation of bolus ECT.
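    The evaluation criterion used here (a point passes if either the dose difference is within 3% or the distance to agreement is within 3 mm) can be checked point by point as in the sketch below on synthetic 1-D profiles. The profiles and the crude discrete DTA search are illustrative simplifications, not the analysis performed in the study.

```python
import numpy as np

# Synthetic 1-D dose profiles (percent of prescription) on a 1 mm grid.
x = np.arange(0, 100, 1.0)                                   # positions in mm
measured = 100 * np.exp(-((x - 50) / 20) ** 2)
calculated = 100 * np.exp(-((x - 51) / 20) ** 2)             # shifted by 1 mm

dose_tol, dta_tol = 3.0, 3.0                                 # 3% / 3 mm criterion

def passes(i):
    """A point passes if the local dose difference is within 3%, or if a calculated
    point with (approximately) the same dose lies within 3 mm (crude discrete DTA)."""
    if abs(calculated[i] - measured[i]) <= dose_tol:
        return True
    close_dose = np.abs(calculated - measured[i]) <= dose_tol
    return close_dose.any() and np.min(np.abs(x[close_dose] - x[i])) <= dta_tol

pass_rate = 100.0 * np.mean([passes(i) for i in range(len(x))])
print(f"pass rate = {pass_rate:.1f}%")
```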

  16. TH-E-BRE-11: Adaptive-Beamlet Based Finite Size Pencil Beam (AB-FSPB) Dose Calculation Algorithm for Independent Verification of IMRT and VMAT

    SciTech Connect

    Park, C; Arhjoul, L; Yan, G; Lu, B; Li, J; Liu, C

    2014-06-15

Purpose: In current IMRT and VMAT settings, the use of a sophisticated dose calculation procedure is inevitable in order to account for the complex treatment fields created by MLCs. As a consequence, independent volumetric dose verification is time consuming, which affects the efficiency of the clinical workflow. In this study, the authors present an efficient pencil beam based dose calculation algorithm that minimizes computation while preserving accuracy. Methods: The computational time of the Finite Size Pencil Beam (FSPB) algorithm is proportional to the number of infinitesimal, identical beamlets that constitute an arbitrary field shape. In AB-FSPB, the dose distribution from each beamlet is mathematically modelled such that the beamlets used to represent an arbitrary field shape no longer need to be infinitesimal or identical. In consequence, it is possible to represent an arbitrary field shape with a minimal number of beamlets of different sizes. Results: On comparing FSPB with AB-FSPB, the complexity of the algorithm was reduced significantly. For a 25 by 25 cm2 square field, 1 beamlet of 25 by 25 cm2 was sufficient to calculate dose in AB-FSPB, whereas conventional FSPB needed a minimum of 2500 beamlets of 0.5 by 0.5 cm2 to calculate a dose comparable to the result computed by the Treatment Planning System (TPS). The algorithm was also found to be GPU compatible, maximizing its computational speed. On calculating the 3D dose of an IMRT plan (~30 control points) and a VMAT plan (~90 control points) with a grid size of 2.0 mm (200 by 200 by 200), the dose could be computed within 3-5 and 10-15 seconds, respectively. Conclusion: The authors have developed an efficient pencil beam type dose calculation algorithm called AB-FSPB. Its fast computation together with GPU compatibility has shown performance better than conventional FSPB. This completely enables the implementation of AB-FSPB in the clinical environment for independent

  17. An algorithm to evaluate solar irradiance and effective dose rates using spectral UV irradiance at four selected wavelengths.

    PubMed

    Anav, A; Rafanelli, C; Di Menno, I; Di Menno, M

    2004-01-01

The paper presents a semi-analytical method for environmental and dosimetric applications to evaluate, in clear sky conditions, the solar irradiance and the effective dose rates for some action spectra using only four spectral irradiance values at selected wavelengths in the UV-B and UV-A regions (305, 320, 340 and 380 nm). The method, named WL4UV, is based on the reconstruction of an approximated spectral irradiance that can be integrated, to obtain the solar irradiance, or convoluted with an action spectrum, to obtain an effective dose rate. The parameters required in the algorithm are deduced from archived solar spectral irradiance data. This database contains measurements carried out by Brewer spectrophotometers located in various geographical positions, at similar altitudes, with very different environmental characteristics: Rome (Italy), Ny Alesund (Svalbard Islands, Norway) and Ushuaia (Tierra del Fuego, Argentina). To evaluate the precision of the method, a double test was performed with data not used in developing the model. Archived Brewer measurement data, in clear sky conditions, from Rome and from the National Science Foundation UV data set in San Diego (CA, USA) and Ushuaia, where SUV-100 spectroradiometers operate, were drawn randomly. The comparison of measured and computed irradiance has a relative deviation of about ±2%. The effective dose rates for the action spectra of Erythema, DNA and non-Melanoma skin cancer have a relative deviation of less than approximately 20% for solar zenith angles <50 degrees. PMID:15266087
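    The general idea, reconstructing an approximate UV spectrum from four measured wavelengths and then integrating it directly for irradiance or after weighting by an action spectrum for an effective dose rate, can be sketched as below. The log-linear interpolation and the erythema-like weighting are simplifications for illustration, not the WL4UV parameterization.

```python
import numpy as np

# The four measured spectral irradiance values (W m^-2 nm^-1) at the selected wavelengths.
wl_meas = np.array([305.0, 320.0, 340.0, 380.0])
irr_meas = np.array([0.02, 0.15, 0.35, 0.55])          # hypothetical clear-sky values

# Reconstruct an approximate spectrum on a 1 nm grid (log-linear interpolation here;
# the actual WL4UV reconstruction uses parameters derived from archived Brewer spectra).
wl = np.arange(305.0, 381.0, 1.0)
irr = np.exp(np.interp(wl, wl_meas, np.log(irr_meas)))

# Solar UV irradiance: integrate the reconstructed spectrum over wavelength (1 nm steps).
uv_irradiance = irr.sum() * 1.0                        # W m^-2

# Effective dose rate: weight the spectrum by an action spectrum before integrating.
# Erythema-like weighting used purely for illustration.
action = np.where(wl <= 328.0,
                  10.0 ** (0.094 * (298.0 - wl)),
                  10.0 ** (0.015 * (140.0 - wl)))
erythemal_dose_rate = (irr * action).sum() * 1.0       # W m^-2 (erythemally weighted)

print(f"UV irradiance ~ {uv_irradiance:.2f} W/m^2, "
      f"erythemal dose rate ~ {erythemal_dose_rate:.4f} W/m^2")
```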

  18. Calculation algorithm for determination of dose versus LET using recombination method

    NASA Astrophysics Data System (ADS)

    Dobrzyńska, Magdalena

    2015-09-01

The biological effectiveness of any type of radiation can be related to the absorbed dose versus linear energy transfer (LET) associated with the particular radiation field. In complex radiation fields containing neutrons, especially in fields of high-energy particles or in stray radiation fields, the radiation quality factor can be determined using detectors whose response depends on LET. Recombination chambers, which are high-pressure, tissue equivalent ionization chambers operating under conditions of initial recombination of ions, form a class of such detectors. The Recombination Microdosimetric Method (RMM) is based on analysis of the shape of the current-voltage characteristic (saturation curve) of a recombination chamber. The ion collection process in the chamber is described by a theoretical formula that contains a number of coefficients which depend on LET. The coefficients are calculated by fitting the shape of the theoretical curve to the experimental data. The purpose of the present project was to develop such a program for determination of the radiation quality factor, based on calculation of the dose distribution versus LET using the RMM.
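    The fitting step described, adjusting a theoretical current-voltage saturation curve whose coefficients carry the LET dependence so that it matches measured recombination-chamber data, can be sketched with scipy's curve_fit. The saturation-curve model and the data below are hypothetical illustrations, not the actual RMM formula or measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def saturation_curve(voltage, i_sat, k):
    """Hypothetical ion-collection model: the collected current approaches the
    saturation current i_sat as the polarizing voltage grows; k stands in for the
    LET-dependent recombination coefficient (the real RMM formula differs)."""
    return i_sat * voltage / (voltage + k)

# Hypothetical measured saturation curve (collection voltage in V, current in nA).
voltage = np.array([25, 50, 100, 200, 400, 800, 1200], dtype=float)
rng = np.random.default_rng(3)
current = saturation_curve(voltage, i_sat=10.0, k=60.0) + rng.normal(0, 0.05, voltage.size)

# Fit the theoretical curve to the data; the fitted coefficient k is what would be
# mapped to LET (and hence to a quality factor) in the recombination method.
popt, _ = curve_fit(saturation_curve, voltage, current, p0=[8.0, 50.0])
print(f"fitted i_sat = {popt[0]:.2f} nA, k = {popt[1]:.1f} V")
```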

  19. Quantitative assessment of the accuracy of dose calculation using pencil beam and Monte Carlo algorithms and requirements for clinical quality assurance

    SciTech Connect

    Ali, Imad; Ahmad, Salahuddin

    2013-10-01

To compare the doses calculated using the BrainLAB pencil beam (PB) and Monte Carlo (MC) algorithms for tumors located in various sites including the lung, and to evaluate the quality assurance procedures required for the verification of the accuracy of dose calculation. The dose-calculation accuracy of PB and MC was also assessed quantitatively with measurements using an ionization chamber and Gafchromic films placed in solid water and heterogeneous phantoms. The dose was calculated using the PB convolution and MC algorithms in the iPlan treatment planning system from BrainLAB. The dose calculation was performed on the patients' computed tomography images with lesions in various treatment sites including 5 lungs, 5 prostates, 4 brains, 2 head and necks, and 2 paraspinal tissues. A combination of conventional, conformal, and intensity-modulated radiation therapy plans was used in dose calculation. The leaf sequences from intensity-modulated radiation therapy plans or beam shapes from conformal plans, the monitor units, and the other planning parameters calculated by the PB were identical to those used for calculating dose with MC. Heterogeneity correction was considered in both PB and MC dose calculations. Dose-volume parameters such as V95 (volume covered by 95% of the prescription dose), dose distributions, and gamma analysis were used to evaluate the dose calculated by PB and MC. The doses measured by the ionization chamber and EBT Gafchromic film in solid water and heterogeneous phantoms were used to quantitatively assess the accuracy of the dose calculated by PB and MC. The dose-volume histograms and dose distributions calculated by PB and MC in the brain, prostate, paraspinal, and head and neck were in good agreement with one another (within 5%) and provided acceptable planning target volume coverage. However, dose distributions of the patients with lung cancer had large discrepancies. For a plan optimized with PB, the dose coverage was shown as clinically acceptable, whereas in reality, the MC showed a

  20. Assessment of dedicated low-dose cardiac micro-CT reconstruction algorithms using the left ventricular volume of small rodents as a performance measure

    SciTech Connect

    Maier, Joscha; Sawall, Stefan; Kachelrieß, Marc

    2014-05-15

Purpose: Phase-correlated microcomputed tomography (micro-CT) imaging plays an important role in the assessment of mouse models of cardiovascular diseases and the determination of functional parameters such as the left ventricular volume. Because the current gold standard, the phase-correlated Feldkamp reconstruction (PCF), shows poor performance in the case of low dose scans, more sophisticated reconstruction algorithms have been proposed to enable low-dose imaging. In this study, the authors focus on the McKinnon-Bates (MKB) algorithm, the low dose phase-correlated (LDPC) reconstruction, and the high-dimensional total variation minimization reconstruction (HDTV), and investigate their potential to accurately determine the left ventricular volume at different dose levels from 50 to 500 mGy. The results were verified in phantom studies of a five-dimensional (5D) mathematical mouse phantom. Methods: Micro-CT data of eight mice, each administered an x-ray dose of 500 mGy, were acquired, retrospectively gated for cardiac and respiratory motion, and reconstructed using PCF, MKB, LDPC, and HDTV. Dose levels down to 50 mGy were simulated by using only a fraction of the projections. Contrast-to-noise ratio (CNR) was evaluated as a measure of image quality. Left ventricular volume was determined using different segmentation algorithms (Otsu, level sets, region growing). Forward projections of the 5D mouse phantom were performed to simulate a micro-CT scan. The simulated data were processed the same way as the real mouse data sets. Results: Compared to the conventional PCF reconstruction, the MKB, LDPC, and HDTV algorithms yield images of increased quality in terms of CNR. While the MKB reconstruction only provides small improvements, a significant increase of the CNR is observed in LDPC and HDTV reconstructions. The phantom studies demonstrate that left ventricular volumes can be determined accurately at 500 mGy. For lower dose levels which were simulated for real mouse data sets, the

  1. High-density dental implants and radiotherapy planning: evaluation of effects on dose distribution using pencil beam convolution algorithm and Monte Carlo method.

    PubMed

    Çatli, Serap

    2015-01-01

The high atomic number and density of dental implants lead to major problems in providing an accurate dose distribution in radiotherapy and in contouring tumors and organs, owing to the artifacts they cause in head and neck cases. The limits and deficiencies of the algorithms used in treatment planning systems can lead to large errors in dose calculation, and this may adversely affect the patient's treatment. In the present study, four commercial dental implants were used: pure titanium, titanium alloy (Ti-6Al-4V), amalgam, and crown. The effects of dental implants on dose distribution were determined with two methods: the pencil beam convolution (PBC) algorithm and a Monte Carlo code, for a 6 MV photon beam. The central axis depth doses were calculated on the phantom for a source-skin distance (SSD) of 100 cm and a 10 × 10 cm2 field using both algorithms. The results of the Monte Carlo method and the Eclipse TPS were compared to each other and to those previously reported. In the present study, dose increases in tissue at a distance of 2 mm in front of the dental implants were seen at 6 MV using the Monte Carlo method, due to the backscatter of electrons from the implants. The Eclipse treatment planning system (TPS) could not precisely account for the backscatter radiation caused by the dental prostheses. The TPS underestimated the backscatter dose and overestimated the dose beyond the dental implants. The large errors found for the TPS in this study are due to the limits and deficiencies of its algorithms. The accuracy of the PBC algorithm of the Eclipse TPS was evaluated in comparison with Monte Carlo calculations in consideration of the recommendations of the American Association of Physicists in Medicine Radiation Therapy Committee Task Group 65. From the comparisons of the TPS and Monte Carlo calculations, it is verified that Monte Carlo simulation is a good approach to derive the dose distribution in heterogeneous media. PMID:26699323

  2. Verification measurements and clinical evaluation of the iPlan RT Monte Carlo dose algorithm for 6 MV photon energy

    NASA Astrophysics Data System (ADS)

    Petoukhova, A. L.; van Wingerden, K.; Wiggenraad, R. G. J.; van de Vaart, P. J. M.; van Egmond, J.; Franken, E. M.; van Santvoort, J. P. C.

    2010-08-01

    This study presents data for verification of the iPlan RT Monte Carlo (MC) dose algorithm (BrainLAB, Feldkirchen, Germany). MC calculations were compared with pencil beam (PB) calculations and with verification measurements in phantoms containing lung-equivalent material, air cavities or bone-equivalent material, mimicking the head-and-neck and thorax regions, and in an Alderson anthropomorphic phantom. The dosimetric accuracy of MC for the micro-multileaf collimator (MLC) simulation was tested in a homogeneous phantom. All measurements were performed using an ionization chamber and Kodak EDR2 films with Novalis 6 MV photon beams. Dose distributions measured with film and calculated with MC in the homogeneous phantom are in excellent agreement for oval, C and squiggle-shaped fields and for a clinical IMRT plan. For a field with completely closed MLC, MC is much closer to the experimental result than the PB calculations. For fields larger than the dimensions of the inhomogeneities, the MC calculations show excellent agreement (within 3%/1 mm) with the experimental data. MC calculations in the anthropomorphic phantom show good agreement with measurements for conformal beam plans and reasonable agreement for dynamic conformal arc and IMRT plans. For 6 head and neck and 15 lung patients a comparison of the MC plan with the PB plan was performed. Our results demonstrate that MC is able to accurately predict the dose in the presence of inhomogeneities typical for head and neck and thorax regions with reasonable calculation times (5-20 min). Lateral electron transport was well reproduced in MC calculations. We are planning to implement MC calculations for head and neck and lung cancer patients.

  3. Influence of model based iterative reconstruction algorithm on image quality of multiplanar reformations in reduced dose chest CT

    PubMed Central

    Dunet, Vincent; Hachulla, Anne-Lise; Grimm, Jochen; Beigelman-Aubry, Catherine

    2016-01-01

    Background Model-based iterative reconstruction (MBIR) reduces image noise and improves image quality (IQ), but its influence on post-processing tools such as maximal intensity projection (MIP) and minimal intensity projection (mIP) remains unknown. Purpose To evaluate the influence of MBIR on the IQ of native slices and of mIP and MIP axial and coronal reformats from reduced-dose computed tomography (RD-CT) chest acquisitions. Material and Methods Raw data of 50 patients, who underwent a standard-dose CT (SD-CT) and a follow-up RD-CT with a CT dose index (CTDI) of 2–3 mGy, were reconstructed with MBIR and filtered back projection (FBP). Native slices, 4-mm-thick MIP, and 3-mm-thick mIP axial and coronal reformats were generated. The relative IQ, subjective IQ, image noise, and number of artifacts were determined in order to compare the different RD-CT reconstructions with the reference SD-CT. Results The lowest noise was observed with MBIR. RD-CT reconstructed by MBIR exhibited the best relative and subjective IQ on coronal views regardless of the post-processing tool. MBIR generated the lowest rate of artifacts on coronal mIP/MIP reformats and the highest rate on axial reformats, mainly distortions and stair-step artifacts. Conclusion The MBIR algorithm reduces image noise but generates more artifacts than FBP on axial mIP and MIP reformats of RD-CT. Conversely, it significantly improves IQ on coronal views, without increasing artifacts, regardless of the post-processing technique.
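    The MIP and mIP reformats evaluated above are obtained by collapsing consecutive thin slices along one axis with a maximum or minimum operation. A minimal numpy sketch; the 4 mm and 3 mm slab thicknesses follow the description above, while the 1 mm slice spacing and the toy volume are assumptions for illustration.

        import numpy as np

        def slab_projection(volume, slice_thickness_mm, slab_mm, kind="MIP", axis=0):
            """Collapse consecutive thin slices into thicker MIP (max) or mIP (min) slabs.

            `kind` is case-sensitive: "MIP" -> maximum intensity, "mIP" -> minimum intensity.
            """
            n = max(1, int(round(slab_mm / slice_thickness_mm)))   # slices per slab
            op = np.min if kind == "mIP" else np.max
            volume = np.moveaxis(volume, axis, 0)
            slabs = [op(volume[i:i + n], axis=0)
                     for i in range(0, volume.shape[0] - n + 1, n)]
            return np.moveaxis(np.stack(slabs), 0, axis)

        # Example: 1 mm axial slices collapsed into 4 mm MIP and 3 mm mIP axial slabs.
        ct = np.random.default_rng(1).normal(-700.0, 80.0, (120, 256, 256))   # toy lung-like volume (HU)
        mip_4mm = slab_projection(ct, slice_thickness_mm=1.0, slab_mm=4.0, kind="MIP")
        mip_3mm = slab_projection(ct, slice_thickness_mm=1.0, slab_mm=3.0, kind="mIP")
        print(mip_4mm.shape, mip_3mm.shape)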

  4. Percentage depth dose calculation accuracy of model based algorithms in high energy photon small fields through heterogeneous media and comparison with plastic scintillator dosimetry.

    PubMed

    Alagar, Ananda Giri Babu; Kadirampatti Mani, Ganesh; Karunakaran, Kaviarasu

    2016-01-01

    Fields smaller than 4 × 4 cm2 are used in stereotactic and conformal treatments, where heterogeneity is normally present. Since dose calculation in small fields and in heterogeneous media is prone to larger discrepancies, the algorithms used by treatment planning systems (TPS) should be evaluated to achieve better treatment results. This report evaluates the accuracy of four model-based algorithms against measurement: X-ray Voxel Monte Carlo (XVMC) from Monaco, Superposition (SP) from CMS XiO, and Acuros XB (AXB) and the analytical anisotropic algorithm (AAA) from Eclipse. Measurements were made with an Exradin W1 plastic scintillator in a Solid Water phantom containing heterogeneities such as air, lung, bone, and aluminum, irradiated with 6 and 15 MV photons with square field sizes ranging from 1 × 1 to 4 × 4 cm2. Each heterogeneity was introduced individually at two different depths from the depth of dose maximum (Dmax), one setup nearer to and the other farther from Dmax. The central-axis percentage depth-dose (CADD) curve for each setup was measured separately and compared with the TPS calculation for the same setup. The percentage normalized root mean squared deviation (%NRMSD), which represents the deviation of the whole CADD curve from the measured curve, was calculated. For air and lung heterogeneities, for both 6 and 15 MV, all algorithms except AAA show maximum deviation for the 1 × 1 cm2 field, with the deviation gradually decreasing as the field size increases. For aluminum and bone, all algorithms' deviations are smaller at 15 MV irrespective of setup. In all heterogeneity setups the 1 × 1 cm2 field showed the maximum deviation, except in the 6 MV bone setup. For all algorithms, irrespective of energy and field size, the dose deviation is higher when a heterogeneity lies near Dmax than when the same heterogeneity lies farther from Dmax. All algorithms also show larger deviations in lower-density materials than in high-density materials. PMID:26894345
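    One common way to condense a whole central-axis depth-dose curve into a single deviation figure, as the %NRMSD used above does, is to take the root-mean-square of the calculated-minus-measured differences and normalise it. A minimal sketch; normalisation by the measured maximum is an assumption here, since the report does not spell out its exact definition, and the curves are invented examples.

        import numpy as np

        def percent_nrmsd(measured, calculated):
            """Percentage normalized RMS deviation of a calculated curve vs. measurement.

            Normalisation by the measured maximum is one common convention (assumed here);
            other definitions normalise by the measured mean or range.
            """
            measured = np.asarray(measured, dtype=float)
            calculated = np.asarray(calculated, dtype=float)
            rmsd = np.sqrt(np.mean((calculated - measured) ** 2))
            return 100.0 * rmsd / measured.max()

        # Illustrative CADD curves through a low-density insert (made-up numbers).
        measured   = [100.0, 95.0, 62.0, 58.0, 55.0, 48.0]
        calculated = [100.0, 96.0, 70.0, 61.0, 56.0, 49.0]
        print(f"%NRMSD = {percent_nrmsd(measured, calculated):.2f}%")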

  5. Whole-body CT-based imaging algorithm for multiple trauma patients: radiation dose and time to diagnosis

    PubMed Central

    Gordic, S; Hodel, S; Simmen, H-P; Brueesch, M; Frauenfelder, T; Wanner, G; Sprengel, K

    2015-01-01

    Objective: To determine the number of imaging examinations, radiation dose and the time to complete trauma-related imaging in multiple trauma patients before and after introduction of whole-body CT (WBCT) into early trauma care. Methods: 120 consecutive patients before and 120 patients after introduction of WBCT into the trauma algorithm of the University Hospital Zurich were compared regarding the number and type of CT, radiography, focused assessment with sonography for trauma (FAST), additional CT examinations (defined as CT of the same body regions after radiography and/or FAST) and the time to complete trauma-related imaging. Results: In the WBCT cohort, significantly more patients underwent CT of the head, neck, chest and abdomen (p < 0.001) than in the non-WBCT cohort, whereas the number of radiographic examinations of the cervical spine, chest and pelvis and of FAST examinations were significantly lower (p < 0.001). There were no significant differences between cohorts regarding the number of radiographic examinations of the upper (p = 0.56) and lower extremities (p = 0.30). We found significantly higher effective doses in the WBCT (29.5 mSv) than in the non-WBCT cohort (15.9 mSv; p < 0.001), but fewer additional CT examinations for completing the work-up were needed in the WBCT cohort (p < 0.001). The time to complete trauma-related imaging was significantly shorter in the WBCT (12 min) than in the non-WBCT cohort (75 min; p < 0.001). Conclusion: Including WBCT in the initial work-up of trauma patients results in higher radiation doses, but fewer additional CT examinations are needed, and the time for completing trauma-related imaging is shorter. Advances in knowledge: WBCT in trauma patients is associated with a high radiation dose of 29.5 mSv. PMID:25594105

  6. SU-E-T-626: Accuracy of Dose Calculation Algorithms in MultiPlan Treatment Planning System in Presence of Heterogeneities

    SciTech Connect

    Moignier, C; Huet, C; Barraux, V; Loiseau, C; Sebe-Mercier, K; Batalla, A; Makovicka, L

    2014-06-15

    Purpose: Advanced stereotactic radiotherapy (SRT) treatments require accurate dose calculation for treatment planning, especially for treatment sites involving heterogeneous patient anatomy. The purpose of this study was to evaluate the accuracy of the dose calculation algorithms implemented in the MultiPlan treatment planning system (TPS), Raytracing and Monte Carlo (MC), in the presence of heterogeneities. Methods: First, the LINAC of a CyberKnife radiotherapy facility was modeled with the PENELOPE MC code. A protocol for the measurement of dose distributions with EBT3 films was established and validated through comparison of experimental dose distributions with dose distributions calculated by the MultiPlan Raytracing and MC algorithms, as well as with the PENELOPE MC model, for treatments planned on the homogeneous Easycube phantom. Finally, bone and lung inserts were used to set up a heterogeneous Easycube phantom. Treatment plans with 10, 7.5 or 5 mm field sizes were generated in the MultiPlan TPS for different tumor localizations (in the lung and at the lung/bone/soft tissue interface). Experimental dose distributions were compared to the PENELOPE MC and MultiPlan calculations using the gamma index method. Results: For the experiment in the homogeneous phantom, 100% of the points passed the 3%/3 mm tolerance criteria. These criteria include the global error of the method (CT-scan resolution, EBT3 dosimetry, LINAC positioning …) and were used afterwards to estimate the accuracy of the MultiPlan algorithms in heterogeneous media. Comparison of the dose distributions obtained in the heterogeneous phantom is in progress. Conclusion: This work has led to the development of numerical and experimental dosimetric tools for small-beam dosimetry. The Raytracing and MC algorithms implemented in the MultiPlan TPS were evaluated in heterogeneous media.

  7. Development of an algorithm for evaluating personal doses due to photon fields in terms of operational quantities for TLD badge system in India.

    PubMed

    Pradhan, S M; Sneha, C; Chourasiya, G; Adtani, M M; Tripathi, S M; Singh, S K

    2009-09-01

    In order to evaluate and report personal doses in terms of personal dose equivalent, the performance of the CaSO(4):Dy based thermoluminescence dosemeter (TLD) badge used for countrywide personnel monitoring in India is investigated using monoenergetic and narrow-spectrum radiation qualities equivalent to those given in ISO standards. Algorithms suitable for evaluating H(p)(10) and H(p)(0.07) within +/- 30% are developed from the responses of the dosemeter elements/discs under different filters, for normal as well as angular irradiation conditions, using these beams. The algorithm is tested on TLD badges irradiated with mixtures of low- and high-energy ((137)Cs) beams in various proportions. The paper concludes with the results of testing the algorithm on badges used in the IAEA/RCA intercomparison studies and a discussion of its inherent limitations. PMID:19755432
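    Multi-element badge algorithms of the kind described above typically infer an effective photon energy from the ratio of disc readings under different filters and then apply an energy-dependent conversion to H(p)(10) and H(p)(0.07). The sketch below is purely illustrative: the filter names, ratio thresholds and conversion factors are invented placeholders, not the published algorithm or its coefficients.

        def personal_dose_equivalent(reading_metal, reading_plastic, reading_open):
            """Illustrative multi-element TLD badge evaluation (placeholder numbers only).

            The ratio of filtered readings serves as a crude photon-energy index;
            the conversion factors below are NOT the published coefficients.
            """
            ratio = reading_metal / reading_plastic      # energy-dependent ratio
            if ratio < 0.6:      # low-energy photons: strong attenuation under the metal filter
                c_hp10, c_hp007 = 0.7, 1.1
            elif ratio < 0.9:    # intermediate energies
                c_hp10, c_hp007 = 0.9, 1.0
            else:                # high-energy photons (e.g. Cs-137): ratio near unity
                c_hp10, c_hp007 = 1.0, 1.0
            hp10 = c_hp10 * reading_metal
            hp007 = c_hp007 * reading_open
            return hp10, hp007

        hp10, hp007 = personal_dose_equivalent(4.8, 5.1, 5.3)   # arbitrary dose-scaled readings
        print(f"Hp(10) ~ {hp10:.2f} mSv, Hp(0.07) ~ {hp007:.2f} mSv")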

  8. Incorporating an Exercise Detection, Grading, and Hormone Dosing Algorithm Into the Artificial Pancreas Using Accelerometry and Heart Rate.

    PubMed

    Jacobs, Peter G; Resalat, Navid; El Youssef, Joseph; Reddy, Ravi; Branigan, Deborah; Preiser, Nicholas; Condon, John; Castle, Jessica

    2015-11-01

    In this article, we present several important contributions necessary for enabling an artificial endocrine pancreas (AP) system to better respond to exercise events. First, we show how exercise can be automatically detected using body-worn accelerometer and heart rate sensors. During a 22 hour overnight inpatient study, 13 subjects with type 1 diabetes wearing a Zephyr accelerometer and heart rate monitor underwent 45 minutes of mild aerobic treadmill exercise while controlling their glucose levels using sensor-augmented pump therapy. We used the accelerometer and heart rate as inputs into a validated regression model. Using this model, we were able to detect the exercise event with a sensitivity of 97.2% and a specificity of 99.5%. Second, from this same study, we show how patients' glucose declined during the exercise event and we present results from in silico modeling that demonstrate how including an exercise model in the glucoregulatory model improves the estimation of the drop in glucose during exercise. Last, we present an exercise dosing adjustment algorithm and describe parameter tuning and performance using an in silico glucoregulatory model during an exercise event. PMID:26438720
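    The exercise-detection step described above maps accelerometer and heart-rate features onto an exercise / no-exercise decision, and sensitivity and specificity follow from the resulting confusion matrix. A hedged sketch using a logistic-regression classifier on synthetic data; the published model's actual features, coefficients and threshold are not reproduced here, and scikit-learn is assumed to be available.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)

        # Synthetic 1-minute feature windows: [activity counts, heart rate in bpm].
        rest     = np.column_stack([rng.normal(50, 20, 300),  rng.normal(70, 8, 300)])
        exercise = np.column_stack([rng.normal(400, 80, 100), rng.normal(120, 12, 100)])
        X = np.vstack([rest, exercise])
        y = np.r_[np.zeros(300), np.ones(100)]            # 1 = exercising

        clf = LogisticRegression().fit(X, y)
        pred = clf.predict_proba(X)[:, 1] > 0.5           # detection threshold (assumed)

        tp = np.sum((pred == 1) & (y == 1))
        tn = np.sum((pred == 0) & (y == 0))
        fp = np.sum((pred == 1) & (y == 0))
        fn = np.sum((pred == 0) & (y == 1))
        print(f"sensitivity = {tp / (tp + fn):.3f}, specificity = {tn / (tn + fp):.3f}")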

  9. SU-E-T-520: Four-Dimensional Dose Calculation Algorithm Considering Variations in Dose Distribution Induced by Sinusoidal One-Dimensional Motion Patterns

    SciTech Connect

    Taguenang, J; Algan, O; Ahmad, S; Ali, I

    2014-06-01

    Purpose: To quantitatively investigate motion-induced variations in dose distributions through measurement and modeling. A four-dimensional (4D) motion model of dose distributions that accounts for different motion parameters was developed. Methods: Variations in dose distributions induced by sinusoidal phantom motion were measured using a multiple-diode-array detector (MapCheck2). MapCheck2 was mounted on a mobile platform that moves with adjustable, calibrated motion patterns in the superior-inferior direction. Various plans, including open and intensity-modulated fields, were used to irradiate MapCheck2. A motion model was developed to predict the spatial and temporal variations in the dose distributions and their dependence on the motion parameters using a pencil-beam spread-out superposition function. This model uses the superposition of pencil beams weighted with a probability function extracted from the motion trajectory. The model was verified against dose distributions measured with MapCheck2. Results: The dose distribution varied considerably with motion: in the region between the isocenter and the 50% isodose line, the dose decreased as the motion amplitude increased, whereas beyond the 50% isodose line the dose increased with increasing motion amplitude. When the range of motion (ROM = twice the amplitude) was smaller than the field length, both the central-axis dose and the 50% isodose line did not change with motion amplitude and remained equal to the dose of the stationary phantom. As the ROM became larger than the field length, the dose level decreased at the central axis and at the 50% isodose line. Motion frequency and phase did not affect dose distributions delivered over an extended time longer than a few motion cycles; however, they played an important role for doses delivered at high dose rates within one motion cycle. Conclusion: A 4D-dose motion model was developed to predict and correct variations in dose distributions induced by one
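    The model above weights pencil-beam contributions by a probability function extracted from the motion trajectory. For purely sinusoidal motion x(t) = A sin(ωt), the position probability density is p(x) = 1/(π√(A² − x²)) for |x| < A, so a motion-averaged 1D dose profile can be obtained by convolving the static profile with that density. A minimal sketch under those assumptions; the field length and amplitude below are example values, not the measured setup.

        import numpy as np

        def sinusoidal_motion_pdf(x, amplitude):
            """Position probability density of x(t) = A*sin(wt), nonzero for |x| < A."""
            pdf = np.zeros_like(x)
            inside = np.abs(x) < amplitude
            pdf[inside] = 1.0 / (np.pi * np.sqrt(amplitude**2 - x[inside]**2))
            return pdf

        # Static 1D dose profile: an idealised 5 cm field along the motion direction.
        x = np.linspace(-8.0, 8.0, 1601)                 # cm, 0.01 cm grid
        dx = x[1] - x[0]
        static = np.where(np.abs(x) <= 2.5, 100.0, 0.0)  # % dose

        # Motion-averaged profile = static profile convolved with the motion PDF.
        amplitude = 1.0                                  # cm (range of motion = 2 cm)
        pdf = sinusoidal_motion_pdf(x, amplitude)
        blurred = np.convolve(static, pdf, mode="same") * dx

        print(f"central-axis dose: static {static[800]:.1f}%, moving {blurred[800]:.1f}%")
        print(f"dose at the field edge (x = 2.5 cm): {np.interp(2.5, x, blurred):.1f}%")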

  10. Commissioning and initial acceptance tests for a commercial convolution dose calculation algorithm for radiotherapy treatment planning in comparison with Monte Carlo simulation and measurement

    PubMed Central

    Moradi, Farhad; Mahdavi, Seyed Rabi; Mostaar, Ahmad; Motamedi, Mohsen

    2012-01-01

    In this study, the commissioning of a dose calculation algorithm in a currently used treatment planning system was performed, and the calculation accuracy of two methods available in the treatment planning system, collapsed cone convolution (CCC) and equivalent tissue-air ratio (ETAR), was verified in tissue heterogeneities. For this purpose an inhomogeneous phantom (IMRT thorax phantom) was used, and dose curves obtained by the TPS (treatment planning system) were compared with experimental measurements and Monte Carlo (MCNP code) simulation. Dose measurements were performed using EDR2 radiographic films within the phantom. The dose difference (DD) between the experimental results and the two calculation methods was obtained. The results indicate a maximum difference between the two methods of 12% in the lung and 3% in the bone tissue of the phantom, and the CCC algorithm shows more accurate depth-dose curves in tissue heterogeneities. The simulation results show accurate dose estimation by MCNP4C in the soft-tissue region of the phantom and also better agreement than the ETAR method in bone and lung tissues. PMID:22973081

  11. Commissioning and initial acceptance tests for a commercial convolution dose calculation algorithm for radiotherapy treatment planning in comparison with Monte Carlo simulation and measurement.

    PubMed

    Moradi, Farhad; Mahdavi, Seyed Rabi; Mostaar, Ahmad; Motamedi, Mohsen

    2012-07-01

    In this study, the commissioning of a dose calculation algorithm in a currently used treatment planning system was performed, and the calculation accuracy of two methods available in the treatment planning system, collapsed cone convolution (CCC) and equivalent tissue-air ratio (ETAR), was verified in tissue heterogeneities. For this purpose an inhomogeneous phantom (IMRT thorax phantom) was used, and dose curves obtained by the TPS (treatment planning system) were compared with experimental measurements and Monte Carlo (MCNP code) simulation. Dose measurements were performed using EDR2 radiographic films within the phantom. The dose difference (DD) between the experimental results and the two calculation methods was obtained. The results indicate a maximum difference between the two methods of 12% in the lung and 3% in the bone tissue of the phantom, and the CCC algorithm shows more accurate depth-dose curves in tissue heterogeneities. The simulation results show accurate dose estimation by MCNP4C in the soft-tissue region of the phantom and also better agreement than the ETAR method in bone and lung tissues. PMID:22973081

  12. Experimental study on the application of a compressed-sensing (CS) algorithm to dental cone-beam CT (CBCT) for accurate, low-dose image reconstruction

    NASA Astrophysics Data System (ADS)

    Oh, Jieun; Cho, Hyosung; Je, Uikyu; Lee, Minsik; Kim, Hyojeong; Hong, Daeki; Park, Yeonok; Lee, Seonhwa; Cho, Heemoon; Choi, Sungil; Koo, Yangseo

    2013-03-01

    In practical applications of three-dimensional (3D) tomographic imaging, there are often challenges for image reconstruction from insufficient data. In computed tomography (CT), for example, image reconstruction from few views would enable fast scanning with reduced doses to the patient. In this study, we investigated and implemented an efficient reconstruction method based on a compressed-sensing (CS) algorithm, which exploits the sparseness of the gradient image with substantially high accuracy, for accurate, low-dose dental cone-beam CT (CBCT) reconstruction. We applied the algorithm to a commercially available dental CBCT system (Expert7™, Vatech Co., Korea) and performed experimental work to demonstrate the algorithm for image reconstruction in insufficient-sampling problems. We successfully reconstructed CBCT images from several undersampled data sets and evaluated the reconstruction quality in terms of the universal quality index (UQI). The experimental demonstrations of the CS-based reconstruction algorithm indicate that it can be applied to current dental CBCT systems to reduce imaging doses and improve image quality.
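    The universal quality index (UQI) referred to above is commonly computed from the local means, variances and covariance of the reconstructed and reference images, Q = 4·σ_xy·x̄·ȳ / ((σ_x² + σ_y²)(x̄² + ȳ²)). A minimal global (single-window) sketch; the original index is usually evaluated in sliding windows and averaged, and the image pair here is synthetic.

        import numpy as np

        def universal_quality_index(reference, test):
            """Wang-Bovik universal quality index over a single window (global form)."""
            x = np.asarray(reference, dtype=float).ravel()
            y = np.asarray(test, dtype=float).ravel()
            mx, my = x.mean(), y.mean()
            vx, vy = x.var(), y.var()
            cov = np.mean((x - mx) * (y - my))
            return 4.0 * cov * mx * my / ((vx + vy) * (mx**2 + my**2))

        # Toy check: a reconstruction from undersampled data vs. the fully sampled reference.
        rng = np.random.default_rng(2)
        full = rng.uniform(0.0, 1.0, (64, 64))
        undersampled = full + rng.normal(0.0, 0.05, full.shape)   # noise stands in for artifacts
        print(f"UQI = {universal_quality_index(full, undersampled):.3f}")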

  13. SU-E-I-05: A Correction Algorithm for Kilovoltage Cone-Beam Computed Tomography Dose Calculations in Cervical Cancer Patients

    SciTech Connect

    Zhang, J; Zhang, W; Lu, J

    2015-06-15

    Purpose: To investigate the accuracy and feasibility of dose calculations using kilovoltage cone-beam computed tomography in cervical cancer radiotherapy using a correction algorithm. Methods: The Hounsfield unit (HU) to electron density (HU-density) curve was obtained for both the planning CT (pCT) and the kilovoltage cone-beam CT (CBCT) using a CIRS-062 calibration phantom. Because pCT and kV-CBCT images have different HU values, directly using the CBCT HU-density curve to calculate dose on CBCT images may introduce deviations in the dose distribution; it is therefore necessary to normalize the HU values between pCT and CBCT. A HU correction algorithm was applied to the CBCT images (cCBCT). Fifteen intensity-modulated radiation therapy (IMRT) plans for cervical cancer were chosen, and the plans were transferred to the pCT and cCBCT data sets without any changes for dose calculation. Phantom and patient studies were carried out. Dose differences and dose distributions were compared between the cCBCT and pCT plans. Results: The HU numbers of the CBCT were measured several times, and the maximum change was less than 2%. Compared with pCT, both CBCT and cCBCT showed discrepancies: the dose differences for the CBCT and cCBCT images were 2.48%±0.65% (range: 1.3%∼3.8%) and 0.48%±0.21% (range: 0.1%∼0.82%), respectively, in the phantom study. For dose calculation on patient images, the dose differences were 2.25%±0.43% (range: 1.4%∼3.4%) and 0.63%±0.35% (range: 0.13%∼0.97%), respectively. For the dose distributions, the passing rate of cCBCT was higher than that of CBCT. Conclusion: Using CBCT images for dose calculation is feasible in cervical cancer radiotherapy, and the correction algorithm offers acceptable accuracy. It will become a useful tool for adaptive radiation therapy.
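    The correction described above relies on separate HU-to-electron-density calibration curves for pCT and CBCT: CBCT voxel values can be mapped onto the pCT HU scale via the common electron-density axis. A minimal sketch with made-up calibration points; the actual CIRS insert values and the correction used in the study are not reproduced.

        import numpy as np

        # Illustrative calibration points (HU, relative electron density) -- NOT the study's values.
        pct_hu,  pct_red  = np.array([-1000, -800, -100, 0, 300, 1200]), np.array([0.0, 0.2, 0.93, 1.0, 1.16, 1.7])
        cbct_hu, cbct_red = np.array([-950, -760, -80, 20, 350, 1300]),  np.array([0.0, 0.2, 0.93, 1.0, 1.16, 1.7])

        def hu_to_density(hu, hu_points, density_points):
            """Piecewise-linear HU -> relative electron density lookup."""
            return np.interp(hu, hu_points, density_points)

        def correct_cbct_hu(cbct_image):
            """Map CBCT HU onto the pCT HU scale via the shared electron-density axis."""
            density = hu_to_density(cbct_image, cbct_hu, cbct_red)
            return np.interp(density, pct_red, pct_hu)

        cbct_slice = np.array([[-900.0, 10.0], [320.0, 1250.0]])
        print(correct_cbct_hu(cbct_slice))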

  14. Dosimetric verification and clinical evaluation of a new commercially available Monte Carlo-based dose algorithm for application in stereotactic body radiation therapy (SBRT) treatment planning

    NASA Astrophysics Data System (ADS)

    Fragoso, Margarida; Wen, Ning; Kumar, Sanath; Liu, Dezhi; Ryu, Samuel; Movsas, Benjamin; Munther, Ajlouni; Chetty, Indrin J.

    2010-08-01

    Modern cancer treatment techniques, such as intensity-modulated radiation therapy (IMRT) and stereotactic body radiation therapy (SBRT), have greatly increased the demand for more accurate treatment planning (structure definition, dose calculation, etc) and dose delivery. The ability to use fast and accurate Monte Carlo (MC)-based dose calculations within a commercial treatment planning system (TPS) in the clinical setting is now becoming more of a reality. This study describes the dosimetric verification and initial clinical evaluation of a new commercial MC-based photon beam dose calculation algorithm, within the iPlan v.4.1 TPS (BrainLAB AG, Feldkirchen, Germany). Experimental verification of the MC photon beam model was performed with film and ionization chambers in water phantoms and in heterogeneous solid-water slabs containing bone and lung-equivalent materials for a 6 MV photon beam from a Novalis (BrainLAB) linear accelerator (linac) with a micro-multileaf collimator (m3 MLC). The agreement between calculated and measured dose distributions in the water phantom verification tests was, on average, within 2%/1 mm (high dose/high gradient) and was within ±4%/2 mm in the heterogeneous slab geometries. Example treatment plans in the lung show significant differences between the MC and one-dimensional pencil beam (PB) algorithms within iPlan, especially for small lesions in the lung, where electronic disequilibrium effects are emphasized. Other user-specific features in the iPlan system, such as options to select dose to water or dose to medium, and the mean variance level, have been investigated. Timing results for typical lung treatment plans show the total computation time (including that for processing and I/O) to be less than 10 min for 1-2% mean variance (running on a single PC with 8 Intel Xeon X5355 CPUs, 2.66 GHz). Overall, the iPlan MC algorithm is demonstrated to be an accurate and efficient dose algorithm, incorporating robust tools for MC

  15. Dosimetric verification and clinical evaluation of a new commercially available Monte Carlo-based dose algorithm for application in stereotactic body radiation therapy (SBRT) treatment planning.

    PubMed

    Fragoso, Margarida; Wen, Ning; Kumar, Sanath; Liu, Dezhi; Ryu, Samuel; Movsas, Benjamin; Munther, Ajlouni; Chetty, Indrin J

    2010-08-21

    Modern cancer treatment techniques, such as intensity-modulated radiation therapy (IMRT) and stereotactic body radiation therapy (SBRT), have greatly increased the demand for more accurate treatment planning (structure definition, dose calculation, etc) and dose delivery. The ability to use fast and accurate Monte Carlo (MC)-based dose calculations within a commercial treatment planning system (TPS) in the clinical setting is now becoming more of a reality. This study describes the dosimetric verification and initial clinical evaluation of a new commercial MC-based photon beam dose calculation algorithm, within the iPlan v.4.1 TPS (BrainLAB AG, Feldkirchen, Germany). Experimental verification of the MC photon beam model was performed with film and ionization chambers in water phantoms and in heterogeneous solid-water slabs containing bone and lung-equivalent materials for a 6 MV photon beam from a Novalis (BrainLAB) linear accelerator (linac) with a micro-multileaf collimator (m(3) MLC). The agreement between calculated and measured dose distributions in the water phantom verification tests was, on average, within 2%/1 mm (high dose/high gradient) and was within +/-4%/2 mm in the heterogeneous slab geometries. Example treatment plans in the lung show significant differences between the MC and one-dimensional pencil beam (PB) algorithms within iPlan, especially for small lesions in the lung, where electronic disequilibrium effects are emphasized. Other user-specific features in the iPlan system, such as options to select dose to water or dose to medium, and the mean variance level, have been investigated. Timing results for typical lung treatment plans show the total computation time (including that for processing and I/O) to be less than 10 min for 1-2% mean variance (running on a single PC with 8 Intel Xeon X5355 CPUs, 2.66 GHz). Overall, the iPlan MC algorithm is demonstrated to be an accurate and efficient dose algorithm, incorporating robust tools for MC

  16. SU-E-J-109: Evaluation of Deformable Accumulated Parotid Doses Using Different Registration Algorithms in Adaptive Head and Neck Radiotherapy

    SciTech Connect

    Xu, S; Liu, B

    2015-06-15

    Purpose: Three deformable image registration (DIR) algorithms were used to perform deformable dose accumulation for head and neck tomotherapy treatments, and the differences in the accumulated doses were evaluated. Methods: Daily MVCT data for 10 patients with pathologically proven nasopharyngeal cancers were analyzed. The data were acquired using tomotherapy (TomoTherapy, Accuray) at the PLA General Hospital. The prescription dose to the primary target was 70 Gy in 33 fractions. Three DIR methods (B-spline, Diffeomorphic Demons and MIMvista) were used to propagate parotid structures from the planning CTs to the daily CTs and to accumulate the fractionated dose on the planning CTs. The mean accumulated doses of the parotids were quantitatively compared, and the uncertainties of the propagated parotid contours were evaluated using the Dice similarity index (DSI). Results: The planned mean dose of the ipsilateral parotids (32.42±3.13 Gy) was slightly higher than that of the contralateral parotids (31.38±3.19 Gy) in the 10 patients. The differences between the accumulated mean doses of the ipsilateral parotids obtained with the B-spline, Demons and MIMvista deformation algorithms (36.40±5.78 Gy, 34.08±6.72 Gy and 33.72±2.63 Gy) were statistically significant (B-spline vs Demons, p<0.0001; B-spline vs MIMvista, p=0.002). The differences between those of the contralateral parotids (34.08±4.82 Gy, 32.42±4.80 Gy and 33.92±4.65 Gy) were also significant (B-spline vs Demons, p=0.009; B-spline vs MIMvista, p=0.074). For the DSI analysis, the scores of the B-spline, Demons and MIMvista DIRs were 0.90, 0.89 and 0.76. Conclusion: Shrinkage of the parotid volumes results in a dose increase to the parotid glands in adaptive head and neck radiotherapy. The accumulated parotid doses show significant differences between the different DIR algorithms applied between kVCT and MVCT. Therefore, the volume-based criterion (i.e. DSI) as a quantitative evaluation of
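    The Dice similarity index (DSI) used above to score the propagated parotid contours compares two binary masks: DSI = 2|A∩B| / (|A| + |B|). A minimal sketch; the masks below are toy arrays, not patient contours.

        import numpy as np

        def dice_similarity(mask_a, mask_b):
            """Dice similarity index between two binary segmentation masks."""
            a = np.asarray(mask_a, dtype=bool)
            b = np.asarray(mask_b, dtype=bool)
            intersection = np.logical_and(a, b).sum()
            return 2.0 * intersection / (a.sum() + b.sum())

        # Toy example: a propagated contour shifted by two voxels relative to the planning contour.
        yy, xx = np.mgrid[:64, :64]
        planning   = (yy - 32) ** 2 + (xx - 32) ** 2 < 10 ** 2
        propagated = (yy - 34) ** 2 + (xx - 32) ** 2 < 10 ** 2
        print(f"DSI = {dice_similarity(planning, propagated):.2f}")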

  17. Quantitative Features of Liver Lesions, Lung Nodules, and Renal Stones at Multi-Detector Row CT Examinations: Dependency on Radiation Dose and Reconstruction Algorithm.

    PubMed

    Solomon, Justin; Mileto, Achille; Nelson, Rendon C; Roy Choudhury, Kingshuk; Samei, Ehsan

    2016-04-01

    Purpose To determine if radiation dose and reconstruction algorithm affect the computer-based extraction and analysis of quantitative imaging features in lung nodules, liver lesions, and renal stones at multi-detector row computed tomography (CT). Materials and Methods Retrospective analysis of data from a prospective, multicenter, HIPAA-compliant, institutional review board-approved clinical trial was performed by extracting 23 quantitative imaging features (size, shape, attenuation, edge sharpness, pixel value distribution, and texture) of lesions on multi-detector row CT images of 20 adult patients (14 men, six women; mean age, 63 years; range, 38-72 years) referred for known or suspected focal liver lesions, lung nodules, or kidney stones. Data were acquired between September 2011 and April 2012. All multi-detector row CT scans were performed at two different radiation dose levels; images were reconstructed with filtered back projection, adaptive statistical iterative reconstruction, and model-based iterative reconstruction (MBIR) algorithms. A linear mixed-effects model was used to assess the effect of radiation dose and reconstruction algorithm on extracted features. Results Among the 23 imaging features assessed, radiation dose had a significant effect on five, three, and four of the features for liver lesions, lung nodules, and renal stones, respectively (P < .002 for all comparisons). Adaptive statistical iterative reconstruction had a significant effect on three, one, and one of the features for liver lesions, lung nodules, and renal stones, respectively (P < .002 for all comparisons). MBIR reconstruction had a significant effect on nine, 11, and 15 of the features for liver lesions, lung nodules, and renal stones, respectively (P < .002 for all comparisons). Of note, the measured size of lung nodules and renal stones with MBIR was significantly different than those for the other two algorithms (P < .002 for all comparisons). Although lesion texture was

  18. On the dosimetric behaviour of photon dose calculation algorithms in the presence of simple geometric heterogeneities: comparison with Monte Carlo calculations

    NASA Astrophysics Data System (ADS)

    Fogliata, Antonella; Vanetti, Eugenio; Albers, Dirk; Brink, Carsten; Clivio, Alessandro; Knöös, Tommy; Nicolini, Giorgia; Cozzi, Luca

    2007-03-01

    A comparative study was performed to reveal differences and relative figures of merit of seven different calculation algorithms for photon beams when applied to inhomogeneous media. The following algorithms were investigated: Varian Eclipse: the anisotropic analytical algorithm, and the pencil beam with modified Batho correction; Nucletron Helax-TMS: the collapsed cone and the pencil beam with equivalent path length correction; CMS XiO: the multigrid superposition and the fast Fourier transform convolution; Philips Pinnacle: the collapsed cone. Monte Carlo simulations (MC) performed with the EGSnrc codes BEAMnrc and DOSxyznrc from NRCC in Ottawa were used as a benchmark. The study was carried out in simple geometrical water phantoms (ρ = 1.00 g cm-3) with inserts of different densities simulating light lung tissue (ρ = 0.035 g cm-3), normal lung (ρ = 0.20 g cm-3) and cortical bone tissue (ρ = 1.80 g cm-3). Experiments were performed for low- and high-energy photon beams (6 and 15 MV) and for square (13 × 13 cm2) and elongated rectangular (2.8 × 13 cm2) fields. Analysis was carried out on the basis of depth dose curves and transverse profiles at several depths. Assuming the MC data as reference, γ index analysis was carried out distinguishing between regions inside the non-water inserts or inside the uniform water. For this study, a distance to agreement was set to 3 mm while the dose difference varied from 2% to 10%. In general all algorithms based on pencil-beam convolutions showed a systematic deficiency in managing the presence of heterogeneous media. In contrast, complicated patterns were observed for the advanced algorithms with significant discrepancies observed between algorithms in the lighter materials (ρ = 0.035 g cm-3), enhanced for the most energetic beam. For denser, and more clinical, densities a better agreement among the sophisticated algorithms with respect to MC was observed.
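    The γ-index analysis used above to compare each algorithm against the Monte Carlo reference combines a dose-difference criterion with a distance-to-agreement (DTA) criterion; a point passes when the combined metric, minimised over the evaluated distribution, is ≤ 1. A brute-force 1D sketch with a 3 mm DTA and a selectable dose-difference tolerance; global normalisation to the reference maximum and the toy curves are assumptions for illustration.

        import numpy as np

        def gamma_index_1d(x_ref, d_ref, x_eval, d_eval, dta_mm=3.0, dose_tol_pct=3.0):
            """Brute-force 1D gamma: for each reference point, search the evaluated curve."""
            dose_tol = dose_tol_pct / 100.0 * d_ref.max()   # global normalisation
            gammas = np.empty_like(d_ref)
            for i, (xr, dr) in enumerate(zip(x_ref, d_ref)):
                dist2 = ((x_eval - xr) / dta_mm) ** 2
                dose2 = ((d_eval - dr) / dose_tol) ** 2
                gammas[i] = np.sqrt(np.min(dist2 + dose2))
            return gammas

        # Toy depth-dose curves (positions in mm, dose in arbitrary units).
        x = np.arange(0.0, 100.0, 1.0)
        mc  = 100.0 * np.exp(-0.03 * x)            # "Monte Carlo" reference
        tps = 100.0 * np.exp(-0.031 * x) + 0.5     # algorithm under test
        gamma = gamma_index_1d(x, mc, x, tps, dta_mm=3.0, dose_tol_pct=3.0)
        print(f"pass rate (gamma <= 1): {100.0 * np.mean(gamma <= 1.0):.1f}%")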

  19. PIVET rFSH dosing algorithms for individualized controlled ovarian stimulation enables optimized pregnancy productivity rates and avoidance of ovarian hyperstimulation syndrome.

    PubMed

    Yovich, John L; Alsbjerg, Birgit; Conceicao, Jason L; Hinchliffe, Peter M; Keane, Kevin N

    2016-01-01

    The first PIVET algorithm for individualized recombinant follicle stimulating hormone (rFSH) dosing in in vitro fertilization, reported in 2012, was based on age and antral follicle count grading with adjustments for anti-Müllerian hormone level, body mass index, day-2 FSH, and smoking history. In 2007, it was enabled by the introduction of a metered rFSH pen allowing small dosage increments of ~8.3 IU per click. In 2011, a second rFSH pen was introduced allowing more precise dosages of 12.5 IU per click, and both pens with their individual algorithms have been applied continuously at our clinic. The objective of this observational study was to validate the PIVET algorithms pertaining to the two rFSH pens with the aim of collecting ≤15 oocytes and minimizing the risk of ovarian hyperstimulation syndrome. The data set included 2,822 in vitro fertilization stimulations over a 6-year period until April 2014 applying either of the two individualized dosing algorithms and corresponding pens. The main outcome measures were the mean number of oocytes retrieved and the resultant embryos designated for transfer or cryopreservation, which permitted calculation of oocyte and embryo utilization rates. Ensuing pregnancies were tracked until live births, and live birth productivity rates embracing fresh and frozen transfers were calculated. Overall, the results showed that mean oocyte numbers were 10.0 for all women <40 years, with 24% requiring rFSH dosages <150 IU. Applying both specific algorithms in our clinic meant that the starting dose was not altered for 79.1% of patients and for 30.1% of those receiving the very lowest rFSH dosages (≤75 IU). Only 0.3% of patients were diagnosed with severe ovarian hyperstimulation syndrome, all deemed avoidable due to definable breaches from the protocols. The live birth productivity rates exceeded 50% for women <35 years and were 33.2% for the group aged 35-39 years. Routine use of both algorithms led to only 11.6% of women generating >15 oocytes

  20. PIVET rFSH dosing algorithms for individualized controlled ovarian stimulation enables optimized pregnancy productivity rates and avoidance of ovarian hyperstimulation syndrome

    PubMed Central

    Yovich, John L; Alsbjerg, Birgit; Conceicao, Jason L; Hinchliffe, Peter M; Keane, Kevin N

    2016-01-01

    The first PIVET algorithm for individualized recombinant follicle stimulating hormone (rFSH) dosing in in vitro fertilization, reported in 2012, was based on age and antral follicle count grading with adjustments for anti-Müllerian hormone level, body mass index, day-2 FSH, and smoking history. In 2007, it was enabled by the introduction of a metered rFSH pen allowing small dosage increments of ~8.3 IU per click. In 2011, a second rFSH pen was introduced allowing more precise dosages of 12.5 IU per click, and both pens with their individual algorithms have been applied continuously at our clinic. The objective of this observational study was to validate the PIVET algorithms pertaining to the two rFSH pens with the aim of collecting ≤15 oocytes and minimizing the risk of ovarian hyperstimulation syndrome. The data set included 2,822 in vitro fertilization stimulations over a 6-year period until April 2014 applying either of the two individualized dosing algorithms and corresponding pens. The main outcome measures were the mean number of oocytes retrieved and the resultant embryos designated for transfer or cryopreservation, which permitted calculation of oocyte and embryo utilization rates. Ensuing pregnancies were tracked until live births, and live birth productivity rates embracing fresh and frozen transfers were calculated. Overall, the results showed that mean oocyte numbers were 10.0 for all women <40 years, with 24% requiring rFSH dosages <150 IU. Applying both specific algorithms in our clinic meant that the starting dose was not altered for 79.1% of patients and for 30.1% of those receiving the very lowest rFSH dosages (≤75 IU). Only 0.3% of patients were diagnosed with severe ovarian hyperstimulation syndrome, all deemed avoidable due to definable breaches from the protocols. The live birth productivity rates exceeded 50% for women <35 years and were 33.2% for the group aged 35–39 years. Routine use of both algorithms led to only 11.6% of women generating >15 oocytes

  1. Study of 201 Non-Small Cell Lung Cancer Patients Given Stereotactic Ablative Radiation Therapy Shows Local Control Dependence on Dose Calculation Algorithm

    SciTech Connect

    Latifi, Kujtim; Oliver, Jasmine; Baker, Ryan; Dilling, Thomas J.; Stevens, Craig W.; Kim, Jongphil; Yue, Binglin; DeMarco, MaryLou; Zhang, Geoffrey G.; Moros, Eduardo G.; Feygelman, Vladimir

    2014-04-01

    Purpose: Pencil beam (PB) and collapsed cone convolution (CCC) dose calculation algorithms differ significantly when used in the thorax. However, such differences have seldom been directly correlated with outcomes of lung stereotactic ablative body radiation (SABR). Methods and Materials: Data for 201 non-small cell lung cancer patients treated with SABR were analyzed retrospectively. All patients were treated with 50 Gy in 5 fractions of 10 Gy each. The radiation prescription mandated that 95% of the planning target volume (PTV) receive the prescribed dose. One hundred sixteen patients were planned with BrainLab treatment planning software (TPS) with the PB algorithm and treated on a Novalis unit. The other 85 were planned on the Pinnacle TPS with the CCC algorithm and treated on a Varian linac. Treatment planning objectives were numerically identical for both groups. The median follow-up times were 24 and 17 months for the PB and CCC groups, respectively. The primary endpoint was local/marginal control of the irradiated lesion. Gray's competing risk method was used to determine the statistical differences in local/marginal control rates between the PB and CCC groups. Results: Twenty-five patients planned with the PB algorithm and 4 patients planned with the CCC algorithm to the same nominal doses experienced local recurrence. There was a statistically significant difference in recurrence rates between the PB and CCC groups (hazard ratio 3.4 [95% confidence interval: 1.18-9.83], Gray's test P=.019). The differences (Δ) between the 2 algorithms for target coverage were as follows: ΔD99{sub GITV} = 7.4 Gy, ΔD99{sub PTV} = 10.4 Gy, ΔV90{sub GITV} = 13.7%, ΔV90{sub PTV} = 37.6%, ΔD95{sub PTV} = 9.8 Gy, and ΔD{sub ISO} = 3.4 Gy. GITV = gross internal tumor volume. Conclusions: Local control in patients planned to the same nominal dose with the PB and CCC algorithms was statistically significantly different. Possible alternative

  2. SU-C-207-05: A Comparative Study of Noise-Reduction Algorithms for Low-Dose Cone-Beam Computed Tomography

    SciTech Connect

    Mukherjee, S; Yao, W

    2015-06-15

    Purpose: To study different noise-reduction algorithms and to improve the image quality of low-dose cone-beam CT for patient positioning in radiation therapy. Methods: In low-dose cone-beam CT, the reconstructed image is contaminated with excessive quantum noise. In this study, three well-developed noise-reduction algorithms, namely (a) the penalized weighted least-squares (PWLS) method, (b) the split-Bregman total variation (TV) method, and (c) the compressed sensing (CS) method, were studied and applied to images of a computer-simulated "Shepp-Logan" phantom and a physical CATPHAN phantom. Up to 20% additive Gaussian noise was added to the Shepp-Logan phantom. The CATPHAN phantom was scanned by a Varian OBI system at 100 kVp, 4 ms and 20 mA. To compare the performance of these algorithms, the peak signal-to-noise ratio (PSNR) of the denoised images was computed. Results: The algorithms were shown to have the potential to reduce the noise level of low-dose CBCT images. For the Shepp-Logan phantom, improvements in PSNR of 2 dB, 3.1 dB and 4 dB were observed using PWLS, TV and CS, respectively, while for the CATPHAN phantom the improvements were 1.2 dB, 1.8 dB and 2.1 dB, respectively. Conclusion: Penalized weighted least-squares, total variation and compressed sensing methods were studied and compared for reducing the noise in images of a simulated phantom and of a physical phantom scanned by low-dose CBCT. The techniques show promising results for noise reduction in terms of PSNR improvement. However, reducing the noise without compromising the smoothness and resolution of the image needs more extensive research.
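    The PSNR figure of merit used above follows directly from the mean squared error between a denoised image and its reference: PSNR = 10·log10(MAX² / MSE). A minimal sketch; the reference/denoised pair below is synthetic and only stands in for the phantom images.

        import numpy as np

        def psnr(reference, test, data_range=None):
            """Peak signal-to-noise ratio in dB between a reference and a test image."""
            reference = np.asarray(reference, dtype=float)
            test = np.asarray(test, dtype=float)
            mse = np.mean((reference - test) ** 2)
            peak = data_range if data_range is not None else reference.max() - reference.min()
            return 10.0 * np.log10(peak ** 2 / mse)

        rng = np.random.default_rng(3)
        phantom = np.zeros((128, 128))
        phantom[32:96, 32:96] = 1.0                                  # toy phantom stand-in
        noisy = phantom + rng.normal(0.0, 0.2, phantom.shape)        # 20% additive Gaussian noise
        denoised = phantom + rng.normal(0.0, 0.1, phantom.shape)     # stand-in for a TV/CS result
        print(f"PSNR noisy    = {psnr(phantom, noisy):.1f} dB")
        print(f"PSNR denoised = {psnr(phantom, denoised):.1f} dB")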

  3. Validation of calculation algorithms for organ doses in CT by measurements on a 5 year old paediatric phantom

    NASA Astrophysics Data System (ADS)

    Dabin, Jérémie; Mencarelli, Alessandra; McMillan, Dayton; Romanyukha, Anna; Struelens, Lara; Lee, Choonsik

    2016-06-01

    Many organ dose calculation tools for computed tomography (CT) scans rely on the assumptions: (1) organ doses estimated for one CT scanner can be converted into organ doses for another CT scanner using the ratio of the Computed Tomography Dose Index (CTDI) between two CT scanners; and (2) helical scans can be approximated as the summation of axial slices covering the same scan range. The current study aims to validate experimentally these two assumptions. We performed organ dose measurements in a 5 year-old physical anthropomorphic phantom for five different CT scanners from four manufacturers. Absorbed doses to 22 organs were measured using thermoluminescent dosimeters for head-to-torso scans. We then compared the measured organ doses with the values calculated from the National Cancer Institute dosimetry system for CT (NCICT) computer program, developed at the National Cancer Institute. Whereas the measured organ doses showed significant variability (coefficient of variation (CoV) up to 53% at 80 kV) across different scanner models, the CoV of organ doses normalised to CTDIvol substantially decreased (12% CoV on average at 80 kV). For most organs, the difference between measured and simulated organ doses was within  ±20% except for the bone marrow, breasts and ovaries. The discrepancies were further explained by additional Monte Carlo calculations of organ doses using a voxel phantom developed from CT images of the physical phantom. The results demonstrate that organ doses calculated for one CT scanner can be used to assess organ doses from other CT scanners with 20% uncertainty (k  =  1), for the scan settings considered in the study.
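    The first assumption tested above, that organ doses transfer between scanners via the CTDIvol ratio, amounts to a simple scaling, and the coefficient of variation (CoV) across scanners quantifies how well it holds. A minimal sketch with invented example numbers, not the measured thermoluminescent dosimeter data.

        import numpy as np

        def scale_organ_dose(organ_dose_ref, ctdivol_ref, ctdivol_new):
            """Convert an organ dose estimated for one scanner to another scanner
            using the ratio of the scanners' CTDIvol (the assumption under test)."""
            return organ_dose_ref * (ctdivol_new / ctdivol_ref)

        def coefficient_of_variation(values):
            values = np.asarray(values, dtype=float)
            return values.std(ddof=1) / values.mean()

        # Invented example: one organ's dose measured on five scanners at 80 kV (mGy).
        measured = np.array([2.1, 3.0, 1.6, 2.6, 2.4])
        ctdivol  = np.array([4.0, 5.6, 3.1, 4.9, 4.6])           # mGy
        print(f"CoV of raw doses:            {coefficient_of_variation(measured):.0%}")
        print(f"CoV of CTDIvol-normalised:   {coefficient_of_variation(measured / ctdivol):.0%}")
        print(f"Scanner A dose scaled to B:  {scale_organ_dose(2.1, 4.0, 5.6):.2f} mGy")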

  4. Validation of calculation algorithms for organ doses in CT by measurements on a 5 year old paediatric phantom.

    PubMed

    Dabin, Jérémie; Mencarelli, Alessandra; McMillan, Dayton; Romanyukha, Anna; Struelens, Lara; Lee, Choonsik

    2016-06-01

    Many organ dose calculation tools for computed tomography (CT) scans rely on the assumptions: (1) organ doses estimated for one CT scanner can be converted into organ doses for another CT scanner using the ratio of the Computed Tomography Dose Index (CTDI) between two CT scanners; and (2) helical scans can be approximated as the summation of axial slices covering the same scan range. The current study aims to validate experimentally these two assumptions. We performed organ dose measurements in a 5 year-old physical anthropomorphic phantom for five different CT scanners from four manufacturers. Absorbed doses to 22 organs were measured using thermoluminescent dosimeters for head-to-torso scans. We then compared the measured organ doses with the values calculated from the National Cancer Institute dosimetry system for CT (NCICT) computer program, developed at the National Cancer Institute. Whereas the measured organ doses showed significant variability (coefficient of variation (CoV) up to 53% at 80 kV) across different scanner models, the CoV of organ doses normalised to CTDIvol substantially decreased (12% CoV on average at 80 kV). For most organs, the difference between measured and simulated organ doses was within  ±20% except for the bone marrow, breasts and ovaries. The discrepancies were further explained by additional Monte Carlo calculations of organ doses using a voxel phantom developed from CT images of the physical phantom. The results demonstrate that organ doses calculated for one CT scanner can be used to assess organ doses from other CT scanners with 20% uncertainty (k  =  1), for the scan settings considered in the study. PMID:27192093

  5. Experimental validation of deterministic Acuros XB algorithm for IMRT and VMAT dose calculations with the Radiological Physics Center's head and neck phantom

    SciTech Connect

    Han Tao; Mourtada, Firas; Kisling, Kelly; Mikell, Justin; Followill, David; Howell, Rebecca

    2012-04-15

    Purpose: The purpose of this study was to verify the dosimetric performance of Acuros XB (AXB), a grid-based Boltzmann solver, in intensity-modulated radiation therapy (IMRT) and volumetric-modulated arc therapy (VMAT). Methods: The Radiological Physics Center (RPC) head and neck (H and N) phantom was used for all calculations and measurements in this study. Clinically equivalent IMRT and VMAT plans were created on the RPC H and N phantom in the Eclipse treatment planning system (version 10.0) by using RPC dose prescription specifications. The dose distributions were calculated with two different algorithms, AXB 11.0.03 and anisotropic analytical algorithm (AAA) 10.0.24. Two dose report modes of AXB were recorded: dose-to-medium in medium (D{sub m,m}) and dose-to-water in medium (D{sub w,m}). Each treatment plan was delivered to the RPC phantom three times for reproducibility by using a Varian Clinac iX linear accelerator. Absolute point dose and planar dose were measured with thermoluminescent dosimeters (TLDs) and GafChromic® EBT2 film, respectively. Profile comparison and 2D gamma analysis were used to quantify the agreement between the film measurements and the calculated dose distributions from both AXB and AAA. The computation times for AAA and AXB were also evaluated. Results: Good agreement was observed between measured doses and those calculated with AAA or AXB. Both AAA and AXB calculated doses within 5% of TLD measurements in both the IMRT and VMAT plans. Results of AXB{sub Dm,m} (0.1% to 3.6%) were slightly better than AAA (0.2% to 4.6%) or AXB{sub Dw,m} (0.3% to 5.1%). The gamma analysis for both AAA and AXB met the RPC 7%/4 mm criteria (over 90% passed), whereas AXB{sub Dm,m} met 5%/3 mm criteria in most cases. AAA was 2 to 3 times faster than AXB for IMRT, whereas AXB was 4-6 times faster than AAA for VMAT. Conclusions: AXB was found to be satisfactorily accurate when compared to measurements in the RPC H and N phantom. Compared with AAA

  6. Electron dose distributions caused by the contact-type metallic eye shield: Studies using Monte Carlo and pencil beam algorithms.

    PubMed

    Kang, Sei-Kwon; Yoon, Jai-Woong; Hwang, Taejin; Park, Soah; Cheong, Kwang-Ho; Han, Tae Jin; Kim, Haeyoung; Lee, Me-Yeon; Kim, Kyoung Ju; Bae, Hoonsik

    2015-01-01

    A metallic contact eye shield has sometimes been used for eyelid treatment, but dose distribution has never been reported for a patient case. This study aimed to show the shield-incorporated CT-based dose distribution using the Pinnacle system and Monte Carlo (MC) calculation for 3 patient cases. For the artifact-free CT scan, an acrylic shield machined as the same size as that of the tungsten shield was used. For the MC calculation, BEAMnrc and DOSXYZnrc were used for the 6-MeV electron beam of the Varian 21EX, in which information for the tungsten, stainless steel, and aluminum material for the eye shield was used. The same plan was generated on the Pinnacle system and both were compared. The use of the acrylic shield produced clear CT images, enabling delineation of the regions of interest, and yielded CT-based dose calculation for the metallic shield. Both the MC and the Pinnacle systems showed a similar dose distribution downstream of the eye shield, reflecting the blocking effect of the metallic eye shield. The major difference between the MC and the Pinnacle results was the target eyelid dose upstream of the shield such that the Pinnacle system underestimated the dose by 19 to 28% and 11 to 18% for the maximum and the mean doses, respectively. The pattern of dose difference between the MC and the Pinnacle systems was similar to that in the previous phantom study. In conclusion, the metallic eye shield was successfully incorporated into the CT-based planning, and the accurate dose calculation requires MC simulation. PMID:25724475

  7. Electron dose distributions caused by the contact-type metallic eye shield: Studies using Monte Carlo and pencil beam algorithms

    SciTech Connect

    Kang, Sei-Kwon; Yoon, Jai-Woong; Hwang, Taejin; Park, Soah; Cheong, Kwang-Ho; Han, Tae Jin; Kim, Haeyoung; Lee, Me-Yeon; Kim, Kyoung Ju; Bae, Hoonsik

    2015-10-01

    A metallic contact eye shield has sometimes been used for eyelid treatment, but dose distribution has never been reported for a patient case. This study aimed to show the shield-incorporated CT-based dose distribution using the Pinnacle system and Monte Carlo (MC) calculation for 3 patient cases. For the artifact-free CT scan, an acrylic shield machined as the same size as that of the tungsten shield was used. For the MC calculation, BEAMnrc and DOSXYZnrc were used for the 6-MeV electron beam of the Varian 21EX, in which information for the tungsten, stainless steel, and aluminum material for the eye shield was used. The same plan was generated on the Pinnacle system and both were compared. The use of the acrylic shield produced clear CT images, enabling delineation of the regions of interest, and yielded CT-based dose calculation for the metallic shield. Both the MC and the Pinnacle systems showed a similar dose distribution downstream of the eye shield, reflecting the blocking effect of the metallic eye shield. The major difference between the MC and the Pinnacle results was the target eyelid dose upstream of the shield such that the Pinnacle system underestimated the dose by 19 to 28% and 11 to 18% for the maximum and the mean doses, respectively. The pattern of dose difference between the MC and the Pinnacle systems was similar to that in the previous phantom study. In conclusion, the metallic eye shield was successfully incorporated into the CT-based planning, and the accurate dose calculation requires MC simulation.

  8. SU-E-I-82: Improving CT Image Quality for Radiation Therapy Using Iterative Reconstruction Algorithms and Slightly Increasing Imaging Doses

    SciTech Connect

    Noid, G; Chen, G; Tai, A; Li, X

    2014-06-01

    Purpose: Iterative reconstruction (IR) algorithms are developed to improve CT image quality (IQ) by reducing noise without diminishing spatial resolution or contrast. For CT in radiation therapy (RT), slightly increasing the imaging dose to improve IQ may be justified if it can substantially enhance structure delineation. The purpose of this study is to investigate and quantify the IQ enhancement resulting from increased imaging doses and the use of IR algorithms. Methods: CT images were acquired for phantoms, built to evaluate IQ metrics including spatial resolution, contrast and noise, with a variety of imaging protocols using a CT scanner (Definition AS Open, Siemens) installed inside a Linac room. Representative patients were scanned once the protocols were optimized. Both phantom and patient scans were reconstructed using the Sinogram Affirmed Iterative Reconstruction (SAFIRE) and the Filtered Back Projection (FBP) methods. IQ metrics of the obtained CTs were compared. Results: The IR techniques are demonstrated to preserve spatial resolution, as measured by the point spread function, and to reduce noise in comparison to traditional FBP. Driven by the reduction in noise, the contrast-to-noise ratio is doubled by adopting the highest SAFIRE strength. As expected, increasing the imaging dose reduces noise for both SAFIRE and FBP reconstructions. The contrast-to-noise ratio increases from 3 to 5 when the dose is increased by a factor of 4. Similar IQ improvement was observed on the CTs of selected patients with pancreas and prostate cancers. Conclusion: The IR techniques produce a measurable enhancement to CT IQ by reducing the noise. Increasing the imaging dose further reduces noise independent of the IR techniques. The improved CT enables more accurate delineation of tumors and/or organs at risk during RT planning and delivery guidance.

  9. SU-E-I-06: A Dose Calculation Algorithm for KV Diagnostic Imaging Beams by Empirical Modeling

    SciTech Connect

    Chacko, M; Aldoohan, S; Sonnad, J; Ahmad, S; Ali, I

    2015-06-15

    Purpose: To develop an accurate three-dimensional (3D) empirical dose calculation model for kV diagnostic beams for different radiographic and CT imaging techniques. Methods: Dose was modeled using photon attenuation measured via depth dose (DD), scatter radiation of the source and medium, and off-axis ratio (OAR) profiles. Measurements were performed using a single diode in water and a diode-array detector (MapCHECK2) with the kV on-board imagers (OBI) integrated with Varian TrueBeam and Trilogy linacs. The dose parameters were measured for three energies (80, 100, and 125 kVp), with and without bowtie filters, using field sizes of 1×1–40×40 cm2 and depths of 0–20 cm in a water tank. Results: The measured DD decreased with depth in water because of photon attenuation, while it increased with field size due to increased scatter radiation from the medium. The DD curves varied with energy and filtration, increasing with higher energies and with beam hardening from the half-fan and full-fan bowtie filters. Scatter radiation factors increased with field size and higher energies. The OAR was within 3% for beam profiles within the flat dose regions. The heel effect of this kV OBI system was within 6% of the central-axis value at different depths. The presence of bowtie filters attenuated the measured dose off-axis by as much as 80% at the edges of large beams. The model dose predictions were verified against doses measured with a single-point diode and ionization chamber or two-dimensional diode-array detectors inserted in solid water phantoms. Conclusion: This empirical model enables fast and accurate 3D dose calculation in water, within 5% in regions with near charged-particle equilibrium conditions outside the buildup region and penumbra. It accurately accounts for the scatter radiation contribution in water, which is superior to the air-kerma or CTDI dose measurements usually used in dose calculation for diagnostic imaging beams. Considering heterogeneity corrections in this model will enable patient specific dose
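    The empirical model described above factorises the dose into a reference output, a depth-dose term, a field-size scatter factor, and an off-axis ratio term, roughly D(d, r, A) ≈ D_ref · DD(d) · Sc(A) · OAR(r). A minimal sketch of such a factorised lookup; the tabulated numbers are invented placeholders, not the measured beam data of the study.

        import numpy as np

        # Invented lookup tables for one beam quality (e.g. 80 kVp with bowtie) -- placeholders only.
        depths_cm   = np.array([0.0, 2.0, 5.0, 10.0, 20.0])
        dd_10x10    = np.array([1.00, 0.62, 0.33, 0.13, 0.02])      # depth dose, 10x10 cm2 field
        field_sizes = np.array([1.0, 5.0, 10.0, 20.0, 40.0])        # square field side (cm)
        scatter_fac = np.array([0.85, 0.95, 1.00, 1.06, 1.12])      # field-size scatter factor
        off_axis_cm = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
        oar_profile = np.array([1.00, 0.99, 0.97, 0.80, 0.20])      # off-axis ratio (bowtie roll-off)

        def empirical_kv_dose(dose_ref_cgy, depth_cm, field_cm, off_axis_distance_cm):
            """Factorised empirical dose: reference output x depth dose x scatter x OAR."""
            dd  = np.interp(depth_cm, depths_cm, dd_10x10)
            sf  = np.interp(field_cm, field_sizes, scatter_fac)
            oar = np.interp(off_axis_distance_cm, off_axis_cm, oar_profile)
            return dose_ref_cgy * dd * sf * oar

        print(f"{empirical_kv_dose(1.0, depth_cm=5.0, field_cm=20.0, off_axis_distance_cm=3.0):.3f} cGy")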

  10. High-Pitch Computed Tomography Coronary Angiography—A New Dose-Saving Algorithm: Estimation of Radiation Exposure

    PubMed Central

    Ketelsen, Dominik; Buchgeister, Markus; Korn, Andreas; Fenchel, Michael; Schmidt, Bernhard; Flohr, Thomas G.; Thomas, Christoph; Schabel, Christoph; Tsiflikas, Ilias; Syha, Roland; Claussen, Claus D.; Heuschmid, Martin

    2012-01-01

    Purpose. To estimate the effective dose and organ equivalent doses of prospectively ECG-triggered high-pitch CTCA. Materials and Methods. For the dose measurements, an Alderson-Rando phantom equipped with thermoluminescent dosimeters was used. The effective dose was calculated according to ICRP 103. Exposure was performed on a second-generation dual-source scanner (SOMATOM Definition Flash, Siemens Medical Solutions, Germany). The following scan parameters were used: 320 mAs per rotation, 100 and 120 kV, pitch 3.4 for prospectively ECG-triggered high-pitch CTCA, scan range of 13.5 cm, collimation 64 × 2 × 0.6 mm with z-flying focal spot, gantry rotation time 280 ms, and simulated heart rate of 60 beats per minute. Results. Depending on the applied tube potential, the effective whole-body dose of the cardiac scan ranged from 1.1 mSv to 1.6 mSv for males and from 1.2 to 1.8 mSv for females. The radiosensitive breast tissue lying in the range of the primary beam increased the female-specific effective dose by 8.6%±0.3% compared to males. By decreasing the tube potential, a significant reduction of the effective dose of 35.8% and 36.0% can be achieved for males and females, respectively (P < 0.001). Conclusion. The radiologist and the CT technician should be aware of this new dose-saving strategy to keep the radiation exposure as low as reasonably achievable. PMID:22701793
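    The effective dose reported above weights the measured organ equivalent doses with the ICRP Publication 103 tissue weighting factors, E = Σ_T w_T H_T. The sketch below uses the published w_T values (with the 13 remainder tissues lumped into one entry); the organ doses fed in are invented example numbers, not the phantom measurements.

        # ICRP 103 tissue weighting factors (they sum to 1.0).
        W_T = {
            "red_bone_marrow": 0.12, "colon": 0.12, "lung": 0.12, "stomach": 0.12,
            "breast": 0.12, "remainder": 0.12,
            "gonads": 0.08,
            "bladder": 0.04, "oesophagus": 0.04, "liver": 0.04, "thyroid": 0.04,
            "bone_surface": 0.01, "brain": 0.01, "salivary_glands": 0.01, "skin": 0.01,
        }

        def effective_dose(organ_equivalent_doses_msv):
            """E = sum over tissues of w_T * H_T (ICRP 103); missing organs count as 0."""
            return sum(W_T[t] * organ_equivalent_doses_msv.get(t, 0.0) for t in W_T)

        # Invented organ equivalent doses for a cardiac scan (mSv) -- for illustration only.
        h_t = {"lung": 5.0, "breast": 4.5, "oesophagus": 3.0, "stomach": 1.5,
               "liver": 1.2, "red_bone_marrow": 0.8, "thyroid": 0.6, "remainder": 1.0}
        print(f"Effective dose ~ {effective_dose(h_t):.2f} mSv")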

  11. On the use of Gafchromic EBT3 films for validating a commercial electron Monte Carlo dose calculation algorithm

    NASA Astrophysics Data System (ADS)

    Chan, EuJin; Lydon, Jenny; Kron, Tomas

    2015-03-01

    This study aims to investigate the effects of oblique incidence, small field size and inhomogeneous media on the electron dose distribution, and to compare calculated (Elekta/CMS XiO) and measured results. All comparisons were done in terms of absolute dose. A new measuring method was developed for high resolution, absolute dose measurement of non-standard beams using Gafchromic® EBT3 film. A portable U-shaped holder was designed and constructed to hold EBT3 films vertically in a reproducible setup submerged in a water phantom. The experimental film method was verified with ionisation chamber measurements and agreed to within 2% or 1 mm. Agreement between XiO electron Monte Carlo (eMC) and EBT3 was within 2% or 2 mm for most standard fields and 3% or 3 mm for the non-standard fields. Larger differences were seen in the build-up region where XiO eMC overestimates dose by up to 10% for obliquely incident fields and underestimates the dose for small circular fields by up to 5% when compared to measurement. Calculations with inhomogeneous media mimicking ribs, lung and skull tissue placed at the side of the film in water agreed with measurement to within 3% or 3 mm. Gafchromic film in water proved to be a convenient high spatial resolution method to verify dose distributions from electrons in non-standard conditions including irradiation in inhomogeneous media.

  12. Comparison of Planned Dose Distributions Calculated by Monte Carlo and Ray-Trace Algorithms for the Treatment of Lung Tumors With CyberKnife: A Preliminary Study in 33 Patients

    SciTech Connect

    Wilcox, Ellen E.; Daskalov, George M.; Lincoln, Holly; Shumway, Richard C.; Kaplan, Bruce M.; Colasanto, Joseph M.

    2010-05-01

    Purpose: To compare dose distributions calculated using the Monte Carlo algorithm (MC) and Ray-Trace algorithm (effective path length method, EPL) for CyberKnife treatments of lung tumors. Materials and Methods: An acceptable treatment plan is created using Multiplan 2.1 and MC dose calculation. Dose is prescribed to the isodose line encompassing 95% of the planning target volume (PTV), and this is the plan delivered clinically. For comparison, the Ray-Trace algorithm with heterogeneity correction (EPL) is used to recalculate the dose distribution for this plan using the same beams, beam directions, and monitor units (MUs). Results: The maximum doses calculated by the EPL to the target PTV are uniformly larger than in the MC plans, by up to a factor of 1.63. Maximum dose differences up to a factor of 4 larger are observed for the critical structures in the chest. Larger differences are associated with more beams traversing longer distances through low-density lung, consistent with the fact that the EPL overestimates doses in low-density structures, an effect that is more pronounced as collimator size decreases. Conclusions: We establish that changing the treatment plan calculation algorithm from EPL to MC can produce large differences in target and critical organ dose coverage. The observed discrepancies are larger for plans using smaller collimator sizes and depend strongly on the anatomical relationship between the target and critical structures.

  13. Development and Evaluation of a New Air Exchange Rate Algorithm for the Stochastic Human Exposure and Dose Simulation Model

    EPA Science Inventory

    between-home and between-city variability in residential pollutant infiltration. This is likely a result of differences in home ventilation, or air exchange rates (AER). The Stochastic Human Exposure and Dose Simulation (SHEDS) model is a population exposure model that uses a pro...

  14. Algorithm for x-ray beam hardening and scatter correction in low-dose cone-beam CT: phantom studies

    NASA Astrophysics Data System (ADS)

    Liu, Wenlei; Rong, Junyan; Gao, Peng; Liao, Qimei; Lu, HongBing

    2016-03-01

    X-ray scatter, as well as beam hardening, poses a significant limitation to image quality in cone-beam CT (CBCT), resulting in image artifacts, contrast reduction, and lack of CT number accuracy. Meanwhile, the x-ray radiation dose is also a concern. Numerous scatter and beam-hardening correction methods have been developed independently, but they are rarely combined with low-dose CT reconstruction. In this paper, we combine scatter suppression with beam hardening correction for sparse-view CT reconstruction to improve CT image quality and reduce CT radiation. Firstly, scatter was measured, estimated, and removed using measurement-based methods, assuming that the signal in the lead blocker shadow is attributable only to x-ray scatter. Secondly, beam hardening was modeled by estimating an equivalent attenuation coefficient at the effective energy, which was integrated into the forward projector of the algebraic reconstruction technique (ART). Finally, compressed sensing (CS) iterative reconstruction is carried out for sparse-view CT reconstruction to reduce the CT radiation. Preliminary Monte Carlo simulated experiments indicate that with only about 25% of the conventional dose, our method reduces the magnitude of the cupping artifact by a factor of 6.1, increases the contrast by a factor of 1.4 and the CNR by a factor of 15. The proposed method can provide good reconstructed images from a few view projections, with effective suppression of artifacts caused by scatter and beam hardening, as well as a reduced radiation dose. With this proposed framework and modeling, it may provide a new way for low-dose CT imaging.
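
    The measurement-based scatter step described above can be illustrated as follows: the detector signal inside the lead-blocker shadows is treated as pure scatter, interpolated across each detector row, and subtracted from the projection. The blocker layout and the simple row-wise interpolation are illustrative choices, not the authors' exact implementation.

      # Sketch of blocker-based scatter estimation and subtraction (toy geometry).
      import numpy as np

      def estimate_scatter(projection, blocker_cols):
          """projection: 2D detector image; blocker_cols: column indices behind lead strips."""
          cols = np.arange(projection.shape[1])
          scatter = np.empty_like(projection)
          for r in range(projection.shape[0]):
              # scatter samples in the shadows, smoothly interpolated along the row
              scatter[r] = np.interp(cols, blocker_cols, projection[r, blocker_cols])
          return scatter

      def scatter_corrected(projection, blocker_cols):
          return np.clip(projection - estimate_scatter(projection, blocker_cols), 0, None)

      # toy example: 64x128 projection with blocker shadows every 16 columns
      proj = np.random.poisson(1000, size=(64, 128)).astype(float)
      blockers = np.arange(8, 128, 16)
      corrected = scatter_corrected(proj, blockers)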

  15. Assessment of the dose reduction potential of a model-based iterative reconstruction algorithm using a task-based performance metrology

    SciTech Connect

    Samei, Ehsan; Richard, Samuel

    2015-01-15

    indicated a 46%–84% dose reduction potential, depending on task, without compromising the modeled detection performance. Conclusions: The presented methodology based on ACR phantom measurements extends current possibilities for the assessment of CT image quality under the complex resolution and noise characteristics exhibited with statistical and iterative reconstruction algorithms. The findings further suggest that MBIR can potentially make better use of the projection data to reduce CT dose by approximately a factor of 2. Alternatively, if the dose is held unchanged, it can improve image quality to different degrees for different tasks.

  16. Commissioning and Validation of the First Monte Carlo Based Dose Calculation Algorithm Commercial Treatment Planning System in Mexico

    SciTech Connect

    Larraga-Gutierrez, J. M.; Garcia-Garduno, O. A.; Hernandez-Bojorquez, M.; Galvan de la Cruz, O. O.; Ballesteros-Zebadua, P.

    2010-12-07

    This work presents the beam data commissioning and dose calculation validation of the first Monte Carlo (MC) based treatment planning system (TPS) installed in Mexico. According to the manufacturer specifications, the beam data commissioning needed for this model includes: several in-air and water profiles, depth dose curves, head-scatter factors and output factors (6×6, 12×12, 18×18, 24×24, 42×42, 60×60, 80×80 and 100×100 mm²). Radiographic and radiochromic films, diode and ionization chambers were used for data acquisition. MC dose calculations in a water phantom were validated using comparisons with measured data. Gamma index criteria of 2%/2 mm were used to evaluate the accuracy of the MC calculations. MC calculated data show excellent agreement for field sizes from 18×18 to 100×100 mm². Gamma analysis shows that, on average, 95% and 100% of the data pass the gamma index criteria for these fields, respectively. For smaller fields (12×12 and 6×6 mm²) only 92% of the data meet the criteria. Total scatter factors show good agreement (<2.6%) between MC calculated and measured data, except for the smaller fields (12×12 and 6×6 mm²), which show an error of 4.7%. MC dose calculations are accurate and precise for clinical treatment planning down to a field size of 18×18 mm². Special care must be taken for smaller fields.

  17. SU-E-J-85: Leave-One-Out Perturbation (LOOP) Fitting Algorithm for Absolute Dose Film Calibration

    SciTech Connect

    Chu, A; Ahmad, M; Chen, Z; Nath, R

    2014-06-01

    Purpose: To introduce an outlier-recognition fitting routine for film dosimetry. It is not only flexible with any linear or non-linear regression but can also provide information on the minimal number of sampling points, critical sampling distributions, and the evaluation of analytical functions for absolute film-dose calibration. Methods: The technique, leave-one-out (LOO) cross validation, is often used for statistical analyses of model performance. We used LOO analyses with perturbed bootstrap fitting, called leave-one-out perturbation (LOOP), for film-dose calibration. Given a threshold, the LOO process detects unfit points (“outliers”) compared to the other cohorts, and a bootstrap fitting process follows to seek any possibility of using perturbations for further improvement. After that, outliers were reconfirmed by traditional t-test statistics and eliminated, and another LOOP pass produced the final fit. An over-sampled film-dose-calibration dataset was collected as a reference (dose range: 0-800 cGy), and various simulated conditions for outliers and sampling distributions were derived from the reference. Comparisons over the various conditions were made, and the performance of the fitting functions, polynomial and rational, was evaluated. Results: (1) LOOP demonstrates sensitive outlier recognition through the markedly better goodness-of-fit obtained when outliers are left out. (2) With sufficient statistical information, LOOP can correct outliers under some low-sampling conditions where other “robust fits”, e.g., Least Absolute Residuals, cannot. (3) Complete cross-validated analyses of LOOP indicate that the rational function demonstrates much superior performance compared to the polynomial. Even with 5 data points including one outlier, using LOOP with a rational function can restore values to more than 95% of the reference values, while the polynomial fitting completely failed under the same conditions
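
    The core leave-one-out idea can be sketched as follows: refit the calibration with each point omitted and flag points whose omission yields an exceptionally better goodness-of-fit. The rational calibration form, the z-score threshold, and the toy data below are illustrative assumptions; the published LOOP routine additionally applies bootstrap perturbation and t-test reconfirmation.

      # Sketch of a leave-one-out outlier screen for film-dose calibration
      # (illustrative calibration function, threshold, and data).
      import numpy as np
      from scipy.optimize import curve_fit

      def rational(x, a, b, c):
          """Rational calibration function: dose as a function of net optical density."""
          return (a + b * x) / (1.0 + c * x)

      def loo_outliers(x, y, threshold=2.0):
          """Flag points whose removal improves the leave-one-out fit RMSE the most."""
          n = len(x)
          rmse = np.empty(n)
          for i in range(n):
              mask = np.arange(n) != i
              popt, _ = curve_fit(rational, x[mask], y[mask], p0=(0.0, 1.0, 0.1), maxfev=10000)
              rmse[i] = np.sqrt(np.mean((y[mask] - rational(x[mask], *popt)) ** 2))
          z = (rmse.mean() - rmse) / (rmse.std() + 1e-12)   # large z => fit much better without point i
          return np.where(z > threshold)[0]

      # toy calibration data (net OD vs dose in cGy) with one deliberately corrupted point
      od = np.array([0.05, 0.12, 0.22, 0.35, 0.48, 0.60, 0.71])
      dose = np.array([50.0, 120.0, 230.0, 380.0, 530.0, 300.0, 800.0])  # index 5 corrupted
      print(loo_outliers(od, dose))   # expected to flag the corrupted point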

  18. Development of a golden beam data set for the commissioning of a proton double-scattering system in a pencil-beam dose calculation algorithm

    SciTech Connect

    Slopsema, R. L.; Flampouri, S.; Yeung, D.; Li, Z.; Lin, L.; McDonough, J. E.; Palta, J.

    2014-09-15

    Purpose: The purpose of this investigation is to determine if a single set of beam data, described by a minimal set of equations and fitting variables, can be used to commission different installations of a proton double-scattering system in a commercial pencil-beam dose calculation algorithm. Methods: The beam model parameters required to commission the pencil-beam dose calculation algorithm (virtual and effective SAD, effective source size, and pristine-peak energy spread) are determined for a commercial double-scattering system. These parameters are measured in a first room and parameterized as a function of proton energy and nozzle settings by fitting four analytical equations to the measured data. The combination of these equations and fitting values constitutes the golden beam data (GBD). To determine the variation in dose delivery between installations, the same dosimetric properties are measured in two additional rooms at the same facility, as well as in a single room at another facility. The difference between the room-specific measurements and the GBD is evaluated against tolerances that guarantee the 3D dose distribution in each of the rooms matches the GBD-based dose distribution within clinically reasonable limits. The pencil-beam treatment-planning algorithm is commissioned with the GBD. The three-dimensional dose distribution in water is evaluated in the four treatment rooms and compared to the treatment-planning calculated dose distribution. Results: The virtual and effective SAD measurements fall between 226 and 257 cm. The effective source size varies between 2.4 and 6.2 cm for the large-field options, and 1.0 and 2.0 cm for the small-field options. The pristine-peak energy spread decreases from 1.05% at the lowest range to 0.6% at the highest. The virtual SAD as well as the effective source size can be accurately described by a linear relationship as a function of the inverse of the residual energy. An additional linear correction term as a function of

  19. A 3D superposition pencil beam dose calculation algorithm for a 60Co therapy unit and its verification by MC simulation

    NASA Astrophysics Data System (ADS)

    Koncek, O.; Krivonoska, J.

    2014-11-01

    The MCNP Monte Carlo code was used to simulate the collimating system of the 60Co therapy unit to calculate the primary and scattered photon fluences, as well as the electron contamination incident on the isocentric plane, as functions of the irradiation field size. Furthermore, a Monte Carlo simulation for the generation of polyenergetic Pencil Beam Kernels (PBKs) was performed using the calculated photon and electron spectra. The PBK was analytically fitted to speed up the dose calculation using the convolution technique in homogeneous media. The quality of the PBK fit was verified by comparing the calculated and simulated 60Co broad beam profiles and depth dose curves in a homogeneous water medium. The inhomogeneity correction coefficients were derived from the PBK simulation of an inhomogeneous slab phantom consisting of various materials. The inhomogeneity calculation model is based on the changes in the PBK radial displacement and on the change of the forward and backward electron scattering. The inhomogeneity correction is derived from the electron density values obtained from a complete 3D CT array and considers the different electron densities through which the pencil beam propagates, as well as the electron density values located between the interaction point and the point of dose deposition. Important aspects and details of the algorithm implementation are also described in this study.

  20. Testing the GLAaS algorithm for dose measurements on low- and high-energy photon beams using an amorphous silicon portal imager

    SciTech Connect

    Nicolini, Giorgia; Fogliata, Antonella; Vanetti, Eugenio; Clivio, Alessandro; Vetterli, Daniel; Cozzi, Luca

    2008-02-15

    The GLAaS algorithm for pretreatment intensity-modulated radiation therapy absolute dose verification based on the use of amorphous silicon detectors, as described in Nicolini et al. [G. Nicolini, A. Fogliata, E. Vanetti, A. Clivio, and L. Cozzi, Med. Phys. 33, 2839-2851 (2006)], was tested under a variety of experimental conditions to investigate its robustness, the possibility of using it in different clinics, and its performance. GLAaS was therefore tested on a low-energy Varian Clinac (6 MV) equipped with an amorphous silicon Portal Vision PV-aS500 with electronic readout IAS2 and on a high-energy Clinac (6 and 15 MV) equipped with a PV-aS1000 and IAS3 electronics. Tests were performed for three calibration conditions: A: adding buildup on the top of the cassette such that SDD − SSD = dmax and comparing measurements with corresponding doses computed at dmax, B: without adding any buildup on the top of the cassette and considering only the intrinsic water-equivalent thickness of the electronic portal imaging device (0.8 cm), and C: without adding any buildup on the top of the cassette but comparing measurements against doses computed at dmax. This procedure is similar to that usually applied when in vivo dosimetry is performed with solid state diodes without sufficient buildup material. Quantitatively, the gamma index (γ), as described by Low et al. [D. A. Low, W. B. Harms, S. Mutic, and J. A. Purdy, Med. Phys. 25, 656-660 (1998)], was assessed. The γ index was computed for a distance to agreement (DTA) of 3 mm. The dose difference ΔD was considered as 2%, 3%, and 4%. As a measure of the quality of the results, the fraction of field area with gamma larger than 1 (%FA) was scored. Results over a set of 50 test samples (including fields from head and neck, breast, prostate, anal canal, and brain cases) and from long-term routine usage demonstrated the robustness and stability of GLAaS. In general, the mean values of %FA
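
    A brute-force implementation of the gamma evaluation and the %FA score used above might look like the sketch below; global dose normalization, a fixed spatial search radius, and a simple low-dose threshold are simplifications chosen for illustration.

      # Minimal 2D gamma evaluation in the spirit of Low et al., plus a %FA score.
      import numpy as np

      def gamma_map(measured, calculated, spacing_mm=1.0, dd_percent=3.0, dta_mm=3.0):
          norm = calculated.max()                          # global dose normalization
          ny, nx = measured.shape
          yy, xx = np.mgrid[0:ny, 0:nx]
          gamma = np.full_like(measured, np.inf, dtype=float)
          search = int(np.ceil(3 * dta_mm / spacing_mm))   # limit the spatial search
          for i in range(ny):
              for j in range(nx):
                  i0, i1 = max(0, i - search), min(ny, i + search + 1)
                  j0, j1 = max(0, j - search), min(nx, j + search + 1)
                  r2 = ((yy[i0:i1, j0:j1] - i) ** 2 + (xx[i0:i1, j0:j1] - j) ** 2) * spacing_mm ** 2
                  d2 = (calculated[i0:i1, j0:j1] - measured[i, j]) ** 2
                  g2 = r2 / dta_mm ** 2 + d2 / (dd_percent / 100.0 * norm) ** 2
                  gamma[i, j] = np.sqrt(g2.min())
          return gamma

      def percent_fa(gamma, measured, threshold=0.1):
          """Fraction of the field area (dose > threshold * max) with gamma > 1."""
          mask = measured > threshold * measured.max()
          return 100.0 * np.mean(gamma[mask] > 1.0)

      # toy usage: a measured map compared against a slightly perturbed calculation
      meas = np.random.rand(50, 50)
      calc = meas + 0.01 * np.random.randn(50, 50)
      print(percent_fa(gamma_map(meas, calc), meas))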

  1. Raman spectroscopy for the analytical quality control of low-dose break-scored tablets.

    PubMed

    Gómez, Diego A; Coello, Jordi; Maspoch, Santiago

    2016-05-30

    Quality control of solid dosage forms involves the analysis of end products according to well-defined criteria, including the assessment of the uniformity of dosage units (UDU). However, in the case of break-scored tablets, given that tablet splitting is widespread as a means to adjust doses, the uniform distribution of the active pharmaceutical ingredient (API) in all the possible fractions of the tablet must be assessed. A general procedure to address both issues, using Raman spectroscopy, is presented. It is based on the acquisition of a collection of spectra in different regions of the tablet, which can later be selected to determine the amount of API in the potential fractions that can result after splitting. The procedure has been applied to two commercial products, Sintrom 1 and Sintrom 4, with API (acenocoumarol) mass proportions of 2% and 0.7%, respectively. Partial Least Squares (PLS) calibration models were constructed for the quantification of acenocoumarol in whole tablets using HPLC as a reference analytical method. Once validated, the calibration models were used to determine the API content in the different potential fragments of the scored Sintrom 4 tablets. Fragment mass measurements were also performed to estimate the range of masses of the halves and quarters that could result after tablet splitting. The results show that Raman spectroscopy can be an alternative analytical procedure to assess the uniformity of content, both in whole tablets and in their potential fragments, and that Sintrom 4 tablets can be split cleanly into halves, but some caution has to be taken when considering fragmentation into quarters. A practical alternative to the use of the UDU test for the assessment of tablet fragments is proposed. PMID:26962721

  2. SU-E-I-89: Assessment of CT Radiation Dose and Image Quality for An Automated Tube Potential Selection Algorithm Using Pediatric Anthropomorphic and ACR Phantoms

    SciTech Connect

    Mahmood, U; Erdi, Y; Wang, W

    2014-06-01

    Purpose: To assess the impact of General Electric's automated tube potential algorithm, kV assist (kVa), on radiation dose and image quality, with an emphasis on optimizing protocols based on noise texture. Methods: Radiation dose was assessed by inserting optically stimulated luminescence dosimeters (OSLs) throughout the body of a pediatric anthropomorphic phantom (CIRS). The baseline protocol was: 120 kVp, 80 mA, 0.7 s rotation time. Image quality was assessed by calculating the contrast-to-noise ratio (CNR) and noise power spectrum (NPS) from the ACR CT accreditation phantom. CNRs were calculated according to the steps described in the ACR CT phantom testing document. NPS was determined by taking the 3D FFT of the uniformity section of the ACR phantom. NPS and CNR were evaluated with and without kVa and for all available adaptive statistical iterative reconstruction (ASiR) settings, ranging from 0 to 100%. Each NPS was also evaluated for its peak frequency difference (PFD) with respect to the baseline protocol. Results: For the baseline protocol, CNR was found to decrease from 0.460 ± 0.182 to 0.420 ± 0.057 when kVa was activated. When compared against the baseline protocol, the PFD at ASiR of 40% corresponded to a decrease in noise magnitude, as reflected by the increase in CNR to 0.620 ± 0.040. The liver dose decreased by 30% with kVa activation. Conclusion: Application of kVa reduces the liver dose by up to 30%. However, a reduction in image quality for abdominal scans occurs when using the automated tube voltage selection feature at the baseline protocol. As demonstrated by the CNR and NPS analysis, the texture and magnitude of the noise in reconstructed images at ASiR 40% were found to be the same as in our baseline images. We have demonstrated that a 30% dose reduction is possible when using 40% ASiR with kVa in pediatric patients.
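
    The NPS computation referred to above follows the standard ensemble definition, NPS = (Δx·Δy / N²)·⟨|FFT(ROI − mean)|²⟩, with a radial average used to read off a peak frequency. The sketch below uses 2D ROIs and white-noise placeholders purely for illustration; the study used a 3D FFT of the uniformity section.

      # Sketch of an NPS estimate from uniform-region ROIs and its radial profile.
      import numpy as np

      def nps_2d(rois, pixel_mm):
          """rois: array (n_roi, N, N) extracted from the uniform phantom section."""
          n_roi, N, _ = rois.shape
          detrended = rois - rois.mean(axis=(1, 2), keepdims=True)   # remove DC per ROI
          ft = np.fft.fftshift(np.fft.fft2(detrended), axes=(1, 2))
          return (pixel_mm ** 2 / (N * N)) * np.mean(np.abs(ft) ** 2, axis=0)

      def radial_profile(nps, pixel_mm):
          N = nps.shape[0]
          f = np.fft.fftshift(np.fft.fftfreq(N, d=pixel_mm))
          fx, fy = np.meshgrid(f, f)
          fr = np.hypot(fx, fy).ravel()
          bins = np.linspace(0, fr.max(), 40)
          idx = np.digitize(fr, bins)
          prof = np.array([nps.ravel()[idx == k].mean() if np.any(idx == k) else np.nan
                           for k in range(1, len(bins))])
          return 0.5 * (bins[:-1] + bins[1:]), prof

      rois = np.random.normal(0.0, 10.0, size=(64, 64, 64))   # toy white-noise ROIs
      freq, prof = radial_profile(nps_2d(rois, pixel_mm=0.5), pixel_mm=0.5)
      print("peak frequency (1/mm):", freq[np.nanargmax(prof)])   # arbitrary for white noise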

  3. Prediction of human observer performance in a 2-alternative forced choice low-contrast detection task using channelized Hotelling observer: Impact of radiation dose and reconstruction algorithms

    SciTech Connect

    Yu Lifeng; Leng Shuai; Chen Lingyun; Kofler, James M.; McCollough, Cynthia H.; Carter, Rickey E.

    2013-04-15

    Purpose: Efficient optimization of CT protocols demands a quantitative approach to predicting human observer performance on specific tasks at various scan and reconstruction settings. The goal of this work was to investigate how well a channelized Hotelling observer (CHO) can predict human observer performance on 2-alternative forced choice (2AFC) lesion-detection tasks at various dose levels and with two different reconstruction algorithms: a filtered-backprojection (FBP) and an iterative reconstruction (IR) method. Methods: A 35 × 26 cm² torso-shaped phantom filled with water was used to simulate an average-sized patient. Three rods with different diameters (small: 3 mm; medium: 5 mm; large: 9 mm) were placed in the center region of the phantom to simulate small, medium, and large lesions. The contrast relative to background was -15 HU at 120 kV. The phantom was scanned 100 times at each of 60, 120, 240, 360, and 480 quality reference mAs, using automatic exposure control, on a 128-slice scanner. After removing the three rods, the water phantom was again scanned 100 times to provide signal-absent background images at the exact same locations. By extracting regions of interest around the three rods and on the signal-absent images, the authors generated 21 2AFC studies. Each 2AFC study had 100 trials, with each trial consisting of a signal-present image and a signal-absent image side-by-side in randomized order. In total, 2100 trials were presented to both the model and human observers. Four medical physicists acted as human observers. For the model observer, the authors used a CHO with Gabor channels, which involves six channel passbands, five orientations, and two phases, leading to a total of 60 channels. The performance predicted by the CHO was compared with that obtained by the four medical physicists for each 2AFC study. Results: The human and model observers were highly correlated at each dose level for each lesion size for both FBP and IR. The
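
    The CHO described above can be sketched as follows: images are projected onto a bank of Gabor channels, a Hotelling template is formed from the channel statistics, and the 2AFC proportion correct is estimated from the decision variables. The channel parameters and the Gaussian test signal below are illustrative, not the study's exact 60-channel configuration.

      # Sketch of a channelized Hotelling observer (CHO) on toy 2AFC data.
      import numpy as np

      def gabor_channel(N, freq_cyc_per_pix, theta, phase):
          y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
          xr = x * np.cos(theta) + y * np.sin(theta)
          envelope = np.exp(-(x ** 2 + y ** 2) / (2 * (N / 6.0) ** 2))
          return envelope * np.cos(2 * np.pi * freq_cyc_per_pix * xr + phase)

      def make_channels(N):
          chans = [gabor_channel(N, f, t, p)
                   for f in (0.02, 0.05, 0.10, 0.20)                       # passbands
                   for t in np.linspace(0, np.pi, 4, endpoint=False)       # orientations
                   for p in (0.0, np.pi / 2)]                              # phases
          return np.stack([c.ravel() for c in chans], axis=1)              # (N*N, n_channels)

      def cho_percent_correct(signal_imgs, noise_imgs, U):
          vs = signal_imgs.reshape(len(signal_imgs), -1) @ U    # channel outputs, signal present
          vn = noise_imgs.reshape(len(noise_imgs), -1) @ U      # channel outputs, signal absent
          S = 0.5 * (np.cov(vs, rowvar=False) + np.cov(vn, rowvar=False))
          w = np.linalg.solve(S, vs.mean(0) - vn.mean(0))       # Hotelling template
          ts, tn = vs @ w, vn @ w
          return np.mean(ts[:, None] > tn[None, :])             # 2AFC proportion correct

      # toy trials: Gaussian blob signal in white noise
      N, n = 64, 100
      yy, xx = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
      signal = 1.5 * np.exp(-(xx ** 2 + yy ** 2) / (2 * 3.0 ** 2))
      noise_imgs = np.random.normal(0, 1, (n, N, N))
      signal_imgs = np.random.normal(0, 1, (n, N, N)) + signal
      print("CHO 2AFC percent correct:", cho_percent_correct(signal_imgs, noise_imgs, make_channels(N)))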

  4. SU-E-I-81: Assessment of CT Radiation Dose and Image Quality for An Automated Tube Potential Selection Algorithm Using Adult Anthropomorphic and ACR Phantoms

    SciTech Connect

    Mahmood, U; Erdi, Y; Wang, W

    2014-06-01

    Purpose: To assess the impact of General Electric's (GE) automated tube potential algorithm, kV assist (kVa), on radiation dose and image quality, with an emphasis on optimizing protocols based on noise texture. Methods: Radiation dose was assessed by inserting optically stimulated luminescence dosimeters (OSLs) throughout the body of an adult anthropomorphic phantom (CIRS). The baseline protocol was: 120 kVp, Auto mA (180 to 380 mA), noise index (NI) = 14, adaptive statistical iterative reconstruction (ASiR) of 20%, 0.8 s rotation time. Image quality was evaluated by calculating the contrast-to-noise ratio (CNR) and noise power spectrum (NPS) from the ACR CT accreditation phantom. CNRs were calculated according to the steps described in the ACR CT phantom testing document. NPS was determined by taking the 3D FFT of the uniformity section of the ACR phantom. NPS and CNR were evaluated with and without kVa and for all available ASiR settings, ranging from 0 to 100%. Each NPS was also evaluated for its peak frequency difference (PFD) with respect to the baseline protocol. Results: The CNR for the adult male was found to decrease from CNR = 0.912 ± 0.045 for the baseline protocol without kVa to CNR = 0.756 ± 0.049 with kVa activated. When compared against the baseline protocol, the PFD at ASiR of 40% corresponded to a decrease in noise magnitude, as reflected by the increase in CNR to 0.903 ± 0.023. The difference in the central liver dose with and without kVa was found to be 0.07%. Conclusion: Dose reduction was insignificant in the adult phantom. As determined by NPS analysis, ASiR of 40% produced images with noise texture similar to the baseline protocol. However, the CNR at ASiR of 40% with kVa fails to meet the current ACR CNR passing requirement of 1.0.

  5. Validation of a method for in vivo 3D dose reconstruction for IMRT and VMAT treatments using on-treatment EPID images and a model-based forward-calculation algorithm

    SciTech Connect

    Van Uytven, Eric; Van Beek, Timothy; McCowan, Peter M.; Chytyk-Praznik, Krista; Greer, Peter B.; McCurdy, Boyd M. C.

    2015-12-15

    Purpose: Radiation treatments are trending toward delivering higher doses per fraction under stereotactic radiosurgery and hypofractionated treatment regimens. There is a need for accurate 3D in vivo patient dose verification using electronic portal imaging device (EPID) measurements. This work presents a model-based technique to compute full three-dimensional patient dose reconstructed from on-treatment EPID portal images (i.e., transmission images). Methods: EPID dose is converted to incident fluence entering the patient using a series of steps which include converting measured EPID dose to fluence at the detector plane and then back-projecting the primary source component of the EPID fluence upstream of the patient. Incident fluence is then recombined with predicted extra-focal fluence and used to calculate 3D patient dose via a collapsed-cone convolution method. This method is implemented in an iterative manner, although in practice it provides accurate results in a single iteration. The robustness of the dose reconstruction technique is demonstrated with several simple slab phantom and nine anthropomorphic phantom cases. Prostate, head and neck, and lung treatments are all included as well as a range of delivery techniques including VMAT and dynamic intensity modulated radiation therapy (IMRT). Results: Results indicate that the patient dose reconstruction algorithm compares well with treatment planning system computed doses for controlled test situations. For simple phantom and square field tests, agreement was excellent with a 2%/2 mm 3D chi pass rate ≥98.9%. On anthropomorphic phantoms, the 2%/2 mm 3D chi pass rates ranged from 79.9% to 99.9% in the planning target volume (PTV) region and 96.5% to 100% in the low dose region (>20% of prescription, excluding PTV and skin build-up region). Conclusions: An algorithm to reconstruct delivered patient 3D doses from EPID exit dosimetry measurements was presented. The method was applied to phantom and patient

  6. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  7. SU-E-T-579: On the Relative Sensitivity of Monte Carlo and Pencil Beam Dose Calculation Algorithms to CT Metal Artifacts in Volumetric-Modulated Arc Spine Radiosurgery (RS)

    SciTech Connect

    Wong, M; Lee, V; Leung, R; Lee, K; Law, G; Tung, S; Chan, M; Blanck, O

    2015-06-15

    Purpose: Investigating the relative sensitivity of Monte Carlo (MC) and Pencil Beam (PB) dose calculation algorithms to low-Z (titanium) metallic artifacts is important for accurate and consistent dose reporting in postoperative spinal RS. Methods: Sensitivity analysis of the MC and PB dose calculation algorithms in the Monaco v.3.3 treatment planning system (Elekta CMS, Maryland Heights, MO, USA) was performed using CT images reconstructed without (plain) and with Orthopedic Metal Artifact Reduction (OMAR; Philips Healthcare, Cleveland, OH, USA). 6 MV and 10 MV volumetric-modulated arc (VMAT) RS plans were obtained for MC and PB on the plain and OMAR images (MC-plain/OMAR and PB-plain/OMAR). Results: Maximum differences in dose to 0.2 cc (D0.2cc) of the spinal cord and cord +2 mm for the 6 MV and 10 MV VMAT plans were 0.1 Gy between MC-OMAR and MC-plain, and between PB-OMAR and PB-plain. Planning target volume (PTV) dose coverage changed by 0.1±0.7% and 0.2±0.3% for 6 MV and 10 MV from MC-OMAR to MC-plain, and by 0.1±0.1% for both 6 MV and 10 MV from PB-OMAR to PB-plain, respectively. In no case, for either MC or PB, was the D0.2cc to the spinal cord found to exceed the planned tolerance when changing from OMAR to plain CT in the dose calculations. Conclusion: The dosimetric impact of metallic artifacts caused by low-Z metallic spinal hardware (mainly titanium alloy) is not clinically important in VMAT-based spine RS, without significant dependence on the dose calculation method (MC or PB) or photon energy ≥ 6 MV. There is no need to use one algorithm instead of the other to reduce uncertainty in dose reporting. The dose calculation method used in spine RS should be consistent with usual clinical practice.

  8. Population Pharmacokinetics of Busulfan in Pediatric and Young Adult Patients Undergoing Hematopoietic Cell Transplant: A Model-Based Dosing Algorithm for Personalized Therapy and Implementation into Routine Clinical Use

    PubMed Central

    Long-Boyle, Janel; Savic, Rada; Yan, Shirley; Bartelink, Imke; Musick, Lisa; French, Deborah; Law, Jason; Horn, Biljana; Cowan, Morton J.; Dvorak, Christopher C.

    2014-01-01

    Background Population pharmacokinetic (PK) studies of busulfan in children have shown that individualized model-based algorithms provide improved targeted busulfan therapy when compared to conventional dosing. The adoption of population PK models into routine clinical practice has been hampered by the tendency of pharmacologists to develop complex models too impractical for clinicians to use. The authors aimed to develop a population PK model for busulfan in children that can reliably achieve therapeutic exposure (concentration at steady state, Css) and to implement a simple, model-based tool for the initial dosing of busulfan in children undergoing HCT. Patients and Methods Model development was conducted using retrospective data available for 90 pediatric and young adult patients who had undergone HCT with busulfan conditioning. Busulfan drug levels and potential covariates influencing drug exposure were analyzed using the non-linear mixed effects modeling software NONMEM. The final population PK model was implemented into a clinician-friendly, Microsoft Excel-based tool and used to recommend initial doses of busulfan in a group of 21 pediatric patients prospectively dosed based on the population PK model. Results Modeling of busulfan time-concentration data indicates that busulfan CL displays non-linearity in children, decreasing by up to approximately 20% between concentrations of 250 and 2000 ng/mL. Important patient-specific covariates found to significantly impact busulfan CL were actual body weight and age. The percentage of individuals achieving a therapeutic Css was significantly higher in subjects receiving initial doses based on the population PK model (81%) versus historical controls dosed on conventional guidelines (52%) (p = 0.02). Conclusion When compared to conventional dosing guidelines, the model-based algorithm demonstrates significant improvement in providing targeted busulfan therapy in children and young adults. PMID:25162216
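
    The structure of a model-based initial-dose calculation of this kind can be illustrated with the steady-state relation dose per interval ≈ CL × Css,target × τ. The clearance covariate model and all numerical values in the sketch below are generic placeholders, not the published NONMEM model or its estimates, and the sketch is not intended for clinical use.

      # Illustrative sketch only: model-based initial dose to hit a target Css.
      # The clearance model (allometric weight scaling plus an age term) and all
      # parameter values are placeholders, not the study's fitted model.
      import math

      def clearance_l_per_hr(weight_kg, age_yr, cl_ref=12.0, wt_ref=70.0, age_coef=0.05):
          """Placeholder covariate model: CL = CL_ref * (WT/70)^0.75 * (1 + age_coef*ln(age))."""
          return cl_ref * (weight_kg / wt_ref) ** 0.75 * (1.0 + age_coef * math.log(max(age_yr, 0.1)))

      def initial_dose_mg(weight_kg, age_yr, css_target_ng_ml=900.0, tau_hr=6.0):
          cl = clearance_l_per_hr(weight_kg, age_yr)          # L/h
          return css_target_ng_ml * 1e-3 * cl * tau_hr        # ng/mL == ug/L, converted to mg

      print(initial_dose_mg(weight_kg=20.0, age_yr=5.0))      # e.g., a 20 kg, 5-year-old child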

  9. SU-F-BRD-15: The Impact of Dose Calculation Algorithm and Hounsfield Units Conversion Tables On Plan Dosimetry for Lung SBRT

    SciTech Connect

    Kuo, L; Yorke, E; Lim, S; Mechalakos, J; Rimner, A

    2014-06-15

    Purpose: To assess dosimetric differences in IMRT lung stereotactic body radiotherapy (SBRT) plans calculated with Varian AAA and Acuros (AXB) and with vendor-supplied (V) versus in-house (IH) measured Hounsfield unit (HU) to mass density and HU to electron density conversion tables. Methods: In-house conversion tables were measured using a Gammex 472 density-plug phantom. IMRT plans (6 MV, Varian TrueBeam, 6–9 coplanar fields) meeting departmental coverage and normal tissue constraints were retrospectively generated for 10 lung SBRT cases using Eclipse Vn 10.0.28 AAA with in-house tables (AAA/IH). Using these monitor units and MLC sequences, plans were recalculated with AAA and vendor tables (AAA/V) and with AXB with both tables (AXB/IH and AXB/V). Ratios to corresponding AAA/IH values were calculated for PTV D95, D01, D99, mean dose, total and ipsilateral lung V20, and chest wall V30. Statistical significance of differences was judged by the Wilcoxon Signed Rank Test (p<0.05). Results: For HU<−400 the vendor HU-mass density table was notably below the IH table. PTV D95 ratios to AAA/IH, averaged over all patients, are 0.963±0.073 (p=0.508), 0.914±0.126 (p=0.011), and 0.998±0.001 (p=0.005) for AXB/IH, AXB/V and AAA/V respectively. Total lung V20 ratios are 1.006±0.046 (p=0.386), 0.975±0.080 (p=0.514) and 0.998±0.002 (p=0.007); ipsilateral lung V20 ratios are 1.008±0.041 (p=0.284), 0.977±0.076 (p=0.443), and 0.998±0.018 (p=0.005) for AXB/IH, AXB/V and AAA/V respectively. In 7 cases, ratios to AAA/IH were within ±5% for all indices studied. For 3 cases characterized by very low lung density and small PTV (19.99±8.09 cc), the PTV D95 ratio for AXB/V ranged from 67.4% to 85.9%, and the AXB/IH D95 ratio ranged from 81.6% to 93.4%; there were large differences in the other studied indices. Conclusion: For AXB users, careful attention to HU conversion tables is important, as they can significantly impact AXB (but not AAA) lung SBRT plans. Algorithm selection is also important for

  10. Dose specification for radiation therapy: dose to water or dose to medium?

    PubMed

    Ma, C-M; Li, Jinsheng

    2011-05-21

    The Monte Carlo method enables accurate dose calculation for radiation therapy treatment planning and has been implemented in some commercial treatment planning systems. Unlike conventional dose calculation algorithms that provide patient dose information in terms of dose to water with variable electron density, the Monte Carlo method calculates the energy deposition in different media and expresses dose to a medium. This paper discusses the differences between dose calculated using water with different electron densities and dose calculated for different biological media, as well as the clinical issues of dose specification, including dose prescription and plan evaluation using dose to water and dose to medium. We will demonstrate that conventional photon dose calculation algorithms compute doses similar to those simulated by Monte Carlo using water with different electron densities, which are close (<4% differences) to doses to media but significantly different (up to 11%) from doses to water converted from doses to media following American Association of Physicists in Medicine (AAPM) Task Group 105 recommendations. Our results suggest that, for consistency with previous radiation therapy experience, Monte Carlo photon algorithms report dose to medium for radiotherapy dose prescription, treatment plan evaluation and treatment outcome analysis. PMID:21508447

  11. Stereotactic Body Radiotherapy for Primary Lung Cancer at a Dose of 50 Gy Total in Five Fractions to the Periphery of the Planning Target Volume Calculated Using a Superposition Algorithm

    SciTech Connect

    Takeda, Atsuya; Sanuki, Naoko; Kunieda, Etsuo; Ohashi, Toshio; Oku, Yohei; Takeda, Toshiaki; Shigematsu, Naoyuki; Kubo, Atsushi

    2009-02-01

    Purpose: To retrospectively analyze the clinical outcomes of stereotactic body radiotherapy (SBRT) for patients with Stages 1A and 1B non-small-cell lung cancer. Methods and Materials: We reviewed the records of patients with non-small-cell lung cancer treated with curative intent between Dec 2001 and May 2007. All patients had histopathologically or cytologically confirmed disease, increased levels of tumor markers, and/or positive findings on fluorodeoxyglucose positron emission tomography. Staging studies identified their disease as Stage 1A or 1B. Performance status was 2 or less according to World Health Organization guidelines in all cases. The prescribed dose of 50 Gy total in five fractions, calculated by using a superposition algorithm, was defined for the periphery of the planning target volume. Results: One hundred twenty-one patients underwent SBRT during the study period, and 63 were eligible for this analysis. Thirty-eight patients had Stage 1A (T1N0M0) and 25 had Stage 1B (T2N0M0). Forty-nine patients were not appropriate candidates for surgery because of chronic pulmonary disease. Median follow-up of these 49 patients was 31 months (range, 10-72 months). The 3-year local control, disease-free, and overall survival rates in patients with Stages 1A and 1B were 93% and 96% (p = 0.86), 76% and 77% (p = 0.83), and 90% and 63% (p = 0.09), respectively. No acute toxicity was observed. Grade 2 or higher radiation pneumonitis was experienced by 3 patients, and 1 of them had fatal bacterial pneumonia. Conclusions: The SBRT at 50 Gy total in five fractions to the periphery of the planning target volume calculated by using a superposition algorithm is feasible. High local control rates were achieved for both T2 and T1 tumors.

  12. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem-solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.

  13. MO-A-BRD-09: A Data-Mining Algorithm for Large Scale Analysis of Dose-Outcome Relationships in a Database of Irradiated Head-And-Neck (HN) Cancer Patients

    SciTech Connect

    Robertson, SP; Quon, H; Kiess, AP; Moore, JA; Yang, W; Cheng, Z; Sharabi, A; McNutt, TR

    2014-06-15

    Purpose: To develop a framework for automatic extraction of clinically meaningful dosimetric-outcome relationships from an in-house, analytic oncology database. Methods: Dose-volume histograms (DVH) and clinical outcome-related structured data elements have been routinely stored in our database for 513 HN cancer patients treated from 2007 to 2014. SQL queries were developed to extract outcomes that had been assessed for at least 100 patients, as well as DVH curves for organs-at-risk (OAR) that were contoured for at least 100 patients. DVH curves for paired OAR (e.g., left and right parotids) were automatically combined and included as additional structures for analysis. For each OAR-outcome combination, DVH dose points, D(V_t), at a series of normalized volume thresholds, V_t = [0.01, 0.99], were stratified into two groups based on outcomes after treatment completion. The probability, P[D(V_t)], of an outcome was modeled at each V_t by logistic regression. Notable combinations, defined as having P[D(V_t)] increase by at least 5% per Gy (p<0.05), were further evaluated for clinical relevance using a custom graphical interface. Results: A total of 57 individual and combined structures and 115 outcomes were queried, resulting in over 6,500 combinations for analysis. Of these, 528 combinations met the 5%/Gy requirement, with further manual inspection revealing a number of reasonable models based on either reported literature or proximity between neighboring OAR. The data mining algorithm confirmed the following well-known toxicity/outcome relationships: dysphagia/larynx, voice changes/larynx, esophagitis/esophagus, xerostomia/combined parotids, and mucositis/oral mucosa. Other notable relationships included dysphagia/pharyngeal constrictors, nausea/brainstem, nausea/spinal cord, weight-loss/mandible, and weight-loss/combined parotids. Conclusion: Our database platform has enabled large-scale analysis of dose-outcome relationships. The current data
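
    The per-threshold screening step can be sketched as follows: for each V_t, the D(V_t) values read off the patients' DVHs are regressed against the binary outcome by logistic regression, and combinations with a steep, significant dose-risk slope are retained. Approximating the "at least 5% per Gy" criterion by the maximum logistic slope β/4, and the toy data, are illustrative simplifications.

      # Sketch of logistic-regression screening of D(V_t) vs outcome (toy data).
      import numpy as np
      import statsmodels.api as sm

      def screen_thresholds(dvh_doses_gy, outcomes, v_thresholds, slope_per_gy=0.05, alpha=0.05):
          """dvh_doses_gy: array (n_patients, n_thresholds) of D(V_t); outcomes: 0/1 array."""
          hits = []
          for k, vt in enumerate(v_thresholds):
              X = sm.add_constant(dvh_doses_gy[:, k])
              fit = sm.Logit(outcomes, X).fit(disp=False)
              beta, p = fit.params[1], fit.pvalues[1]
              if beta / 4.0 >= slope_per_gy and p < alpha:   # beta/4 = max slope of the logistic curve
                  hits.append((vt, beta, p))
          return hits

      # toy data: 120 patients, D(V_t) at 5 thresholds, outcome driven by the 3rd column
      rng = np.random.default_rng(0)
      doses = rng.uniform(10, 70, size=(120, 5))
      prob = 1 / (1 + np.exp(-(doses[:, 2] - 45) * 0.3))
      outcomes = rng.binomial(1, prob)
      print(screen_thresholds(doses, outcomes, v_thresholds=[0.1, 0.3, 0.5, 0.7, 0.9]))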

  14. Modulation of insulin dose titration using a hypoglycaemia-sensitive algorithm: insulin glargine versus neutral protamine Hagedorn insulin in insulin-naïve people with type 2 diabetes

    PubMed Central

    Home, P D; Bolli, G B; Mathieu, C; Deerochanawong, C; Landgraf, W; Candelas, C; Pilorget, V; Dain, M-P; Riddle, M C

    2015-01-01

    Aims To examine whether insulin glargine can lead to better control of glycated haemoglobin (HbA1c) than that achieved by neutral protamine Hagedorn (NPH) insulin, using a protocol designed to limit nocturnal hypoglycaemia. Methods The present study, the Least One Oral Antidiabetic Drug Treatment (LANCELOT) Study, was a 36-week, randomized, open-label, parallel-arm study conducted in Europe, Asia, the Middle East and South America. Participants were randomized (1 : 1) to begin glargine or NPH, on a background of metformin with glimepiride. Weekly insulin titration aimed to achieve median prebreakfast and nocturnal plasma glucose levels ≤5.5 mmol/l, while limiting values ≤4.4 mmol/l. Results The efficacy population (n = 701) had a mean age of 57 years, a mean body mass index of 29.8 kg/m2, a mean duration of diabetes of 9.2 years and a mean HbA1c level of 8.2% (66 mmol/mol). At treatment end, HbA1c values and the proportion of participants with HbA1c <7.0% (<53 mmol/mol) were not significantly different for glargine [7.1% (54 mmol/mol) and 50.3%] versus NPH [7.2% (55 mmol/mol) and 44.3%]. The rate of symptomatic nocturnal hypoglycaemia, confirmed by plasma glucose ≤3.9 or ≤3.1 mmol/l, was 29% and 48% lower, respectively, with glargine than with NPH insulin. Other outcomes were similar between the groups. Conclusion Insulin glargine was not superior to NPH insulin in improving glycaemic control. The insulin dosing algorithm was not sufficient to equalize nocturnal hypoglycaemia between the two insulins. This study confirms, in a globally heterogeneous population, the reduction in nocturnal hypoglycaemia achieved with insulin glargine compared with NPH while attaining good glycaemic control, even when titrating basal insulin to prevent nocturnal hypoglycaemia rather than treating according to normal fasting glucose levels. PMID:24957785

  15. Quantum algorithms

    NASA Astrophysics Data System (ADS)

    Abrams, Daniel S.

    This thesis describes several new quantum algorithms. These include a polynomial time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases (commonly found in ab initio physics and chemistry problems) for which all known classical algorithms require exponential time. Fast algorithms for simulating many body Fermi systems are also provided in both first and second quantized descriptions. An efficient quantum algorithm for anti-symmetrization is given as well as a detailed discussion of a simulation of the Hubbard model. In addition, quantum algorithms that calculate numerical integrals and various characteristics of stochastic processes are described. Two techniques are given, both of which obtain an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo (probabilistic) methods. I derive a simpler and slightly faster version of Grover's mean algorithm, show how to apply quantum counting to the problem, develop some variations of these algorithms, and show how both (apparently distinct) approaches can be understood from the same unified framework. Finally, the relationship between physics and computation is explored in some more depth, and it is shown that computational complexity theory depends very sensitively on physical laws. In particular, it is shown that nonlinear quantum mechanics allows for the polynomial time solution of NP-complete and #P oracle problems. Using the Weinberg model as a simple example, the explicit construction of the necessary gates is derived from the underlying physics. Nonlinear quantum algorithms are also presented using Polchinski type nonlinearities which do not allow for superluminal communication. (Copies available exclusively from MIT Libraries, Rm. 14- 0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)

  16. A dose error evaluation study for 4D dose calculations

    NASA Astrophysics Data System (ADS)

    Milz, Stefan; Wilkens, Jan J.; Ullrich, Wolfgang

    2014-10-01

    Previous studies have shown that respiration-induced motion is not negligible for Stereotactic Body Radiation Therapy. The intrafractional breathing-induced motion influences the delivered dose distribution on the underlying patient geometry, such as the lung or the abdomen. If a static geometry is used, the planning process for these indications does not represent the entire dynamic process. The quality of a full 4D dose calculation approach depends on the dose coordinate transformation process between deformable geometries. This article provides an evaluation study that introduces an advanced method to verify the quality of numerical dose transformation generated by four different algorithms. The transformation metric used is based on the deviation of the dose mass histogram (DMH) and the mean dose throughout dose transformation. The study compares the results of four algorithms. In general, two elementary approaches are used: dose mapping and energy transformation. Dose interpolation (DIM) and an advanced concept, the so-called divergent dose mapping model (dDMM), are used for dose mapping. These algorithms are compared to the basic energy transformation model (bETM) and the energy mass congruent mapping (EMCM). For evaluation, 900 small sample regions of interest (ROI) are generated inside an exemplary lung geometry (4DCT). A homogeneous fluence distribution is assumed for dose calculation inside the ROIs. The dose transformations are performed with the four different algorithms. The study investigates the DMH metric and the mean dose metric for different scenarios (voxel sizes: 8 mm, 4 mm, 2 mm, 1 mm; 9 different breathing phases). dDMM achieves the best transformation accuracy in all measured test cases, with 3-5% lower errors than the other models. The results of dDMM are reasonable and most efficient in this study, although the model is simple and easy to implement. The EMCM model also achieved suitable results, but the approach requires a more complex

  17. Haplotyping algorithms

    SciTech Connect

    Sobel, E.; Lange, K.; O`Connell, J.R.

    1996-12-31

    Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.

  18. Integral-transport-based deterministic brachytherapy dose calculations

    NASA Astrophysics Data System (ADS)

    Zhou, Chuanyu; Inanc, Feyzi

    2003-01-01

    We developed a transport-equation-based deterministic algorithm for computing three-dimensional brachytherapy dose distributions. The deterministic algorithm is based on the integral transport equation. The algorithm provided us with the capability of computing dose distributions for multiple isotropic point and/or volumetric sources in a homogeneous/heterogeneous medium. The algorithm results have been benchmarked against results from the literature and against MCNP results for isotropic point sources and volumetric sources.

  19. Algorithm development

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Lomax, Harvard

    1987-01-01

    The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.

  20. Optimization of the double dosimetry algorithm for interventional cardiologists

    NASA Astrophysics Data System (ADS)

    Chumak, Vadim; Morgun, Artem; Bakhanova, Elena; Voloskiy, Vitalii; Borodynchik, Elena

    2014-11-01

    A double dosimetry method is recommended in interventional cardiology (IC) to assess occupational exposure; yet currently there is no common and universal algorithm for effective dose estimation. In this work, a flexible and adaptive algorithm-building methodology was developed, and a specific algorithm applicable to typical irradiation conditions of IC procedures was obtained. It was shown that the obtained algorithm agrees well with experimental measurements and is less conservative compared to other known algorithms.
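
    Double-dosimetry algorithms generally take the form of a weighted combination of the under-apron and over-apron readings, E ≈ α·Hu + β·Ho. The sketch below uses commonly cited NCRP-style weights purely to make the structure concrete; they are not the optimized coefficients derived in this work.

      # Generic form of a double-dosimetry algorithm. The 0.5/0.025 weights follow a
      # commonly cited NCRP-style formula and are NOT the coefficients of this study.
      def effective_dose_estimate(hu_msv, ho_msv, alpha=0.5, beta=0.025):
          """E ≈ alpha * Hu (under apron) + beta * Ho (over apron), all in mSv."""
          return alpha * hu_msv + beta * ho_msv

      # Example: 0.4 mSv under the apron and 6.0 mSv on the collar dosimeter
      print(effective_dose_estimate(0.4, 6.0))   # -> 0.35 mSv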

  1. A framework for analytical estimation of patient-specific CT dose

    NASA Astrophysics Data System (ADS)

    Youn, Hanbean; Kim, Jin Woo; Jeon, Hosang; Nam, Jiho; Yun, Seungman; Cho, Min Kook; Kim, Ho Kyung

    2016-03-01

    The authors introduce an algorithm to estimate the spatial dose distributions in computed tomography (CT) images. The algorithm calculates dose distributions due to the primary and scattered photons separately. The algorithm requires only the CT data set, which includes the patient CT images and the scanner acquisition parameters; if the acquisition parameters are not available, they are extracted from the CT images. Using the developed algorithm, the dose distributions for head and chest phantoms were computed, and the results show excellent agreement with the dose distributions obtained using a commercial Monte Carlo code. The developed algorithm can be applied to patient-specific CT dose estimation based on the CT data.
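
    The primary-photon part of such an analytical estimate can be sketched as ray-by-ray exponential attenuation of the incident fluence followed by local energy deposition; the separately modeled scatter component is omitted here. The geometry (parallel rays), attenuation values, and units in the sketch are illustrative.

      # Sketch of the primary-photon dose term (parallel-beam toy geometry).
      import numpy as np

      def primary_dose(mu_map, fluence_in=1.0, pixel_cm=0.1):
          """mu_map: 2D linear attenuation coefficients (1/cm); rays travel along axis 1."""
          # cumulative attenuation from the entrance surface up to (but excluding) each voxel
          path = np.cumsum(mu_map, axis=1) - mu_map
          fluence = fluence_in * np.exp(-path * pixel_cm)
          # energy deposited locally ~ fluence * mu * voxel length (arbitrary dose units)
          return fluence * mu_map * pixel_cm

      mu = np.full((64, 64), 0.2)          # water-like phantom, mu ~ 0.2 / cm
      mu[20:40, 20:40] = 0.4               # denser insert
      dose = primary_dose(mu)
      print(dose[:, 0].mean(), dose[:, -1].mean())   # entrance vs exit dose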

  2. Dose reconstruction for intensity-modulated radiation therapy using a non-iterative method and portal dose image

    NASA Astrophysics Data System (ADS)

    Yeo, Inhwan Jason; Jung, Jae Won; Chew, Meng; Kim, Jong Oh; Wang, Brian; Di Biase, Steven; Zhu, Yunping; Lee, Dohyung

    2009-09-01

    A straightforward and accurate method was developed to verify the delivery of intensity-modulated radiation therapy (IMRT) and to reconstruct the dose in a patient. The method is based on a computational algorithm that linearly describes the physical relationship between beamlets and dose-scoring voxels in a patient and the dose image from an electronic portal imaging device (EPID). The relationship is expressed in the form of dose response functions (responses) that are quantified using Monte Carlo (MC) particle transport techniques. From the dose information measured by the EPID, the received patient dose is reconstructed by inversely solving the algorithm. The unique and novel non-iterative feature of this algorithm sets it apart from many existing dose reconstruction methods in the literature. This study presents the algorithm in detail and validates it experimentally for open and IMRT fields. Responses were first calculated for each beamlet of the selected fields by MC simulation. In-phantom and exit film dosimetry were performed on a flat phantom. Using the calculated responses and the algorithm, the exit film dose was used to inversely reconstruct the in-phantom dose, which was then compared with the measured in-phantom dose. The dose comparison in the phantom for all irradiated fields showed that more than 90% of dose points passed the criteria of 3% dose difference and 3 mm distance to agreement.
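
    The linear, non-iterative character of the method can be illustrated as follows: the portal image is modeled as d_EPID = R_exit · w, the beamlet weights w are recovered by a direct least-squares inversion, and the patient dose follows as d_patient = R_patient · w. The random response matrices below are stand-ins for the Monte Carlo-computed responses.

      # Sketch of non-iterative dose reconstruction as a direct linear inversion.
      import numpy as np

      rng = np.random.default_rng(1)
      n_beamlets, n_pixels, n_voxels = 50, 400, 1000
      R_exit = rng.random((n_pixels, n_beamlets))       # responses at the detector plane
      R_patient = rng.random((n_voxels, n_beamlets))    # responses in the dose-scoring voxels

      w_true = rng.random(n_beamlets)                   # delivered beamlet fluence (unknown in practice)
      d_epid = R_exit @ w_true                          # measured portal dose image (noise-free toy)

      w_rec, *_ = np.linalg.lstsq(R_exit, d_epid, rcond=None)   # non-iterative inversion
      d_patient = R_patient @ w_rec                              # reconstructed patient dose

      print(np.allclose(w_rec, w_true, atol=1e-8))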

  3. Assessment of image quality and radiation dose of prospectively ECG-triggered adaptive dual-source coronary computed tomography angiography (cCTA) with arrhythmia rejection algorithm in systole versus diastole: a retrospective cohort study.

    PubMed

    Lee, Ashley M; Beaudoin, Jonathan; Engel, Leif-Christopher; Sidhu, Manavjot S; Abbara, Suhny; Brady, Thomas J; Hoffmann, Udo; Ghoshhajra, Brian B

    2013-08-01

    In this study, we sought to evaluate the image quality and effective radiation dose of prospectively ECG-triggered adaptive systolic (PTA-systolic) dual-source CTA versus prospectively triggered adaptive diastolic (PTA-diastolic) dual-source CTA in patients of unselected heart rate and rhythm. This retrospective cohort study consisted of 41 PTA-systolic and 41 matched PTA-diastolic CTA patients who underwent clinically indicated 128-slice dual-source CTA between December 2010 and June 2012. Image quality and motion artifact score (both on a Likert scale of 1-4, with 4 being the best), effective dose, and CTDIvol were compared. The effect of heart rate (HR) and heart rate variability (HRV) on image motion artifact score and CTDIvol was analyzed with Pearson's correlation coefficient. All 82 exams were considered diagnostic, with 0 non-diagnostic segments. PTA-systolic CTA patients had a higher maximum HR, wider HRV, were less likely to be in sinus rhythm, and received less beta-blocker than PTA-diastolic CTA patients. No difference in effective dose was observed (PTA-systolic vs. PTA-diastolic CTA: 2.9 vs. 2.2 mSv, p = 0.26). Image quality score (3.3 vs. 3.5, p < 0.05) and motion artifact score (3.5 vs. 3.8, p < 0.05) were lower in PTA-systolic CTAs than in PTA-diastolic CTAs. For PTA-systolic CTAs, an increase in HR was not associated with a negative impact on motion artifact score or CTDIvol. For PTA-diastolic CTAs, an increase in HR was associated with increased motion artifacts and CTDIvol. HRV demonstrated no correlation with motion artifact or CTDIvol for either PTA-systolic or PTA-diastolic CTAs. In conclusion, both PTA-diastolic CTA and PTA-systolic CTA yielded diagnostic examinations at unselected heart rates and rhythms with similar effective radiation dose, but PTA-systolic CTA resulted in more consistent radiation exposure and image quality across a wide range of rates and rhythms. PMID:23526082

  4. Dose audit failures and dose augmentation

    NASA Astrophysics Data System (ADS)

    Herring, C.

    1999-01-01

    Standards EN 552 and ISO 11137, covering radiation sterilization, are technically equivalent in their requirements for the selection of the sterilization dose. Dose Setting Methods 1 and 2 described in Annex B of ISO 11137 can be used to meet these requirements for the selection of the sterilization dose. Both dose setting methods require a dose audit every 3 months to determine the continued validity of the sterilization dose. This paper addresses the subject of dose audit failures and investigations into their cause. It also presents a method to augment the sterilization dose when the number of audit positives exceeds the limits imposed by ISO 11137.

  5. Calculation of the biological effective dose for piecewise defined dose-rate fits

    SciTech Connect

    Hobbs, Robert F.; Sgouros, George

    2009-03-15

    An algorithmic solution to the biological effective dose (BED) calculation from the Lea-Catcheside formula for a piecewise-defined function is presented. Data from patients treated for metastatic thyroid cancer were used to illustrate the solution. The Lea-Catcheside formula for the G-factor of the BED is integrated numerically using a large number of small trapezoidal fits to each integral. The algorithmically calculated BED is compatible with an analytic calculation for a similarly valued exponentially fitted dose-rate plot and is the only solution for piecewise-defined dose-rate functions.
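
    A numerical version of this calculation can be sketched directly from the Lea-Catcheside definition, G = (2/D²)·∫ Ṙ(t)[∫₀^t Ṙ(t′)·exp(−μ(t − t′)) dt′] dt, with BED = D·[1 + G·D/(α/β)], using trapezoidal integration. The repair constant μ and the α/β value in the sketch are illustrative, not the patient-specific values of the study.

      # Sketch of a trapezoidal Lea-Catcheside G-factor and BED for an arbitrary
      # (e.g., piecewise-defined) dose-rate curve r(t). Parameter values are illustrative.
      import numpy as np

      def _trapz(y, x):
          return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

      def bed_piecewise(t_hr, rate_gy_per_hr, mu_per_hr=0.693 / 1.5, alpha_beta_gy=10.0):
          t = np.asarray(t_hr, dtype=float)
          r = np.asarray(rate_gy_per_hr, dtype=float)
          total_dose = _trapz(r, t)
          inner = np.empty_like(t)
          for i in range(len(t)):                       # inner convolution integral up to t[i]
              decay = np.exp(-mu_per_hr * (t[i] - t[:i + 1]))
              inner[i] = _trapz(r[:i + 1] * decay, t[:i + 1])
          g_factor = 2.0 * _trapz(r * inner, t) / total_dose ** 2
          bed = total_dose * (1.0 + g_factor * total_dose / alpha_beta_gy)
          return total_dose, g_factor, bed

      # piecewise dose-rate example: exponentially decaying rate for 200 h, then zero
      t = np.linspace(0.0, 300.0, 3000)
      rate = np.where(t <= 200.0, 0.05 * np.exp(-0.01 * t), 0.0)
      print(bed_piecewise(t, rate))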

  6. Algorithmic chemistry

    SciTech Connect

    Fontana, W.

    1990-12-13

    In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed size ensemble of randomly interacting functions. This 'function gas', or 'Turing gas', is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.

  7. Low-dose computed tomography image restoration using previous normal-dose scan

    SciTech Connect

    Ma, Jianhua; Huang, Jing; Feng, Qianjin; Zhang, Hua; Lu, Hongbing; Liang, Zhengrong; Chen, Wufan

    2011-10-15

    Purpose: In current computed tomography (CT) examinations, the associated x-ray radiation dose is of a significant concern to patients and operators. A simple and cost-effective means to perform the examinations is to lower the milliampere-seconds (mAs) or kVp parameter (or delivering less x-ray energy to the body) as low as reasonably achievable in data acquisition. However, lowering the mAs parameter will unavoidably increase data noise and the noise would propagate into the CT image if no adequate noise control is applied during image reconstruction. Since a normal-dose high diagnostic CT image scanned previously may be available in some clinical applications, such as CT perfusion imaging and CT angiography (CTA), this paper presents an innovative way to utilize the normal-dose scan as a priori information to induce signal restoration of the current low-dose CT image series. Methods: Unlike conventional local operations on neighboring image voxels, nonlocal means (NLM) algorithm utilizes the redundancy of information across the whole image. This paper adapts the NLM to utilize the redundancy of information in the previous normal-dose scan and further exploits ways to optimize the nonlocal weights for low-dose image restoration in the NLM framework. The resulting algorithm is called the previous normal-dose scan induced nonlocal means (ndiNLM). Because of the optimized nature of nonlocal weights calculation, the ndiNLM algorithm does not depend heavily on image registration between the current low-dose and the previous normal-dose CT scans. Furthermore, the smoothing parameter involved in the ndiNLM algorithm can be adaptively estimated based on the image noise relationship between the current low-dose and the previous normal-dose scanning protocols. Results: Qualitative and quantitative evaluations were carried out on a physical phantom as well as clinical abdominal and brain perfusion CT scans in terms of accuracy and resolution properties. The gain by the use
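
    As a rough illustration of the nonlocal-means idea underlying ndiNLM, the sketch below restores a pixel as a patch-similarity-weighted average over a search window of the prior normal-dose image. It is a plain NLM sketch only: the optimized weights and adaptive smoothing parameter described above are not reproduced, and all parameter values are assumptions.

```python
import numpy as np

def nlm_restore_pixel(low_dose, prior, i, j, patch=3, search=11, h=0.05):
    """Restore one interior pixel of a low-dose CT image as a weighted
    average over a search window of the previous normal-dose (prior) image;
    weights come from patch similarity between the two images."""
    p, s = patch // 2, search // 2
    ref = low_dose[i - p:i + p + 1, j - p:j + p + 1]      # patch around target pixel
    num = den = 0.0
    for m in range(i - s, i + s + 1):
        for n in range(j - s, j + s + 1):
            cand = prior[m - p:m + p + 1, n - p:n + p + 1]
            d2 = np.mean((ref - cand) ** 2)               # patch distance
            w = np.exp(-d2 / (h * h))                     # nonlocal weight
            num += w * prior[m, n]
            den += w
    return num / den
```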

  8. Stability of Bareiss algorithm

    NASA Astrophysics Data System (ADS)

    Bojanczyk, Adam W.; Brent, Richard P.; de Hoog, F. R.

    1991-12-01

    In this paper, we present a numerical stability analysis of Bareiss algorithm for solving a symmetric positive definite Toeplitz system of linear equations. We also compare Bareiss algorithm with Levinson algorithm and conclude that the former has superior numerical properties.

  9. SU-E-T-280: Reconstructed Rectal Wall Dose Map-Based Verification of Rectal Dose Sparing Effect According to Rectum Definition Methods and Dose Perturbation by Air Cavity in Endo-Rectal Balloon

    SciTech Connect

    Park, J; Park, H; Lee, J; Kang, S; Lee, M; Suh, T; Lee, B

    2014-06-01

    Purpose: The dosimetric effect of, and discrepancy between, rectum definition methods, as well as the dose perturbation caused by the air cavity in an endo-rectal balloon (ERB), were verified using rectal-wall (Rwall) dose maps, considering systematic errors in dose optimization and calculation accuracy in intensity-modulated radiation treatment (IMRT) for prostate cancer patients. Methods: With an inflated ERB of average diameter 4.5 cm and air volume 100 cc in place, Rwall doses were predicted by pencil-beam convolution (PBC), the anisotropic analytic algorithm (AAA), and AcurosXB (AXB) with its material assignment function. The errors in dose optimization and calculation arising from separating the air cavity from the whole rectum (Rwhole) were verified against measured rectal doses. The Rwall doses affected by the dose perturbation of the air cavity were evaluated using a dedicated rectal phantom allowing insertion of rolled-up gafchromic films and glass rod detectors placed along the rectum perimeter. Inner and outer Rwall doses were verified with reconstructed predicted rectal wall dose maps. Dose errors and their extent at different dose levels were evaluated together with estimated rectal toxicity. Results: While AXB showed no significant difference in target dose coverage, Rwall doses were underestimated by up to 20% when dose optimization was performed for the Rwhole rather than the Rwall, at all dose levels except the maximum dose. When dose optimization was applied for the Rwall, the Rwall doses agreed to within 3% between dose calculation algorithms, except for an overestimation of the maximum rectal dose of up to 5% with PBC. Dose optimization for the Rwhole caused Rwall dose differences, especially at intermediate doses. Conclusion: Dose optimization for the Rwall is suggested for more accurate prediction of rectal wall dose and of the dose perturbation effect of the air cavity in IMRT for prostate cancer. This research was supported by the Leading Foreign Research Institute Recruitment Program through the National Research Foundation of Korea

  10. In vivo TLD dose measurements in catheter-based high-dose-rate brachytherapy.

    PubMed

    Adlienė, Diana; Jakštas, Karolis; Urbonavičius, Benas Gabrielis

    2015-07-01

    Routine in vivo dosimetry is well established in external beam radiotherapy; in high-dose-rate (HDR) brachytherapy, however, it is restricted mainly to the detection of gross errors, owing to the difficulty of measurement in the steep dose gradients near the radioactive source and the resulting high uncertainties. The results of in vivo dose measurements using TLD 100 mini rods and TLD 'pin worms' in catheter-based HDR brachytherapy are provided in this paper, along with their comparison with the corresponding dose values obtained using the calculation algorithm of the treatment planning system. The possibility of performing independent verification of treatment delivery in HDR brachytherapy using TLDs is discussed. PMID:25809111

  11. Direct dose mapping versus energy/mass transfer mapping for 4D dose accumulation: fundamental differences and dosimetric consequences

    NASA Astrophysics Data System (ADS)

    Li, Haisen S.; Zhong, Hualiang; Kim, Jinkoo; Glide-Hurst, Carri; Gulam, Misbah; Nurushev, Teamour S.; Chetty, Indrin J.

    2014-01-01

    The direct dose mapping (DDM) and energy/mass transfer (EMT) mapping are two essential algorithms for accumulating the dose from different anatomic phases to the reference phase when there is organ motion or tumor/tissue deformation during the delivery of radiation therapy. DDM is based on interpolation of the dose values from one dose grid to another and thus lacks rigor in defining the dose when there are multiple dose values mapped to one dose voxel in the reference phase due to tissue/tumor deformation. On the other hand, EMT counts the total energy and mass transferred to each voxel in the reference phase and calculates the dose by dividing the energy by mass. Therefore it is based on fundamentally sound physics principles. In this study, we implemented the two algorithms and integrated them within the Eclipse treatment planning system. We then compared the clinical dosimetric difference between the two algorithms for ten lung cancer patients receiving stereotactic radiosurgery treatment, by accumulating the delivered dose to the end-of-exhale (EE) phase. Specifically, the respiratory period was divided into ten phases and the dose to each phase was calculated and mapped to the EE phase and then accumulated. The displacement vector field generated by Demons-based registration of the source and reference images was used to transfer the dose and energy. The DDM and EMT algorithms produced noticeably different cumulative dose in the regions with sharp mass density variations and/or high dose gradients. For the planning target volume (PTV) and internal target volume (ITV) minimum dose, the difference was up to 11% and 4% respectively. This suggests that DDM might not be adequate for obtaining an accurate dose distribution of the cumulative plan, instead, EMT should be considered.
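
    The distinction between the two mappings can be made concrete with a toy accumulation step. In the sketch below (a schematic illustration under a simplified voxel-to-voxel mapping, not the Eclipse integration described above), DDM would interpolate dose values directly, whereas EMT accumulates energy and mass separately and only forms dose at the end.

```python
import numpy as np

def emt_accumulate(energy_ref, mass_ref, energy_phase, mass_phase, mapping):
    """Energy/mass-transfer accumulation: each source voxel deposits its
    energy and mass into the reference voxel it deforms onto ('mapping'
    maps source voxel index -> reference voxel index).  Dose is computed
    only after all phases have been accumulated, as energy / mass."""
    for src, ref in enumerate(mapping):
        energy_ref[ref] += energy_phase[src]
        mass_ref[ref] += mass_phase[src]
    return energy_ref, mass_ref

# After looping over all respiratory phases:
#   dose_ref = energy_ref / mass_ref   (wherever mass_ref > 0)
# Multiple source voxels mapping onto one reference voxel are handled
# naturally here, which is exactly where direct dose mapping becomes ambiguous.
```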

  12. Monte Carlo dose computation for IMRT optimization*

    NASA Astrophysics Data System (ADS)

    Laub, W.; Alber, M.; Birkner, M.; Nüsslin, F.

    2000-07-01

    A method which combines the accuracy of Monte Carlo dose calculation with a finite size pencil-beam based intensity modulation optimization is presented. The pencil-beam algorithm is employed to compute the fluence element updates for a converging sequence of Monte Carlo dose distributions. The combination is shown to improve results over the pencil-beam based optimization in a lung tumour case and a head and neck case. Inhomogeneity effects like a broader penumbra and dose build-up regions can be compensated for by intensity modulation.

  13. Library of Continuation Algorithms

    2005-03-01

    LOCA (Library of Continuation Algorithms) is scientific software written in C++ that provides advanced analysis tools for nonlinear systems. In particular, it provides parameter continuation algorithms, bifurcation tracking algorithms, and drivers for linear stability analysis. The algorithms are aimed at large-scale applications that use Newton's method for their nonlinear solve.

  14. The BR eigenvalue algorithm

    SciTech Connect

    Geist, G.A.; Howell, G.W.; Watkins, D.S.

    1997-11-01

    The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.

  15. A Monte Carlo based three-dimensional dose reconstruction method derived from portal dose images

    SciTech Connect

    Elmpt, Wouter J. C. van; Nijsten, Sebastiaan M. J. J. G.; Schiffeleers, Robert F. H.; Dekker, Andre L. A. J.; Mijnheer, Ben J.; Lambin, Philippe; Minken, Andre W. H.

    2006-07-15

    The verification of intensity-modulated radiation therapy (IMRT) is necessary for adequate quality control of the treatment. Pretreatment verification may trace the possible differences between the planned dose and the actual dose delivered to the patient. To estimate the impact of differences between planned and delivered photon beams, a three-dimensional (3-D) dose verification method has been developed that reconstructs the dose inside a phantom. The pretreatment procedure is based on portal dose images measured with an electronic portal imaging device (EPID) of the separate beams, without the phantom in the beam and a 3-D dose calculation engine based on the Monte Carlo calculation. Measured gray scale portal images are converted into portal dose images. From these images the lateral scattered dose in the EPID is subtracted and the image is converted into energy fluence. Subsequently, a phase-space distribution is sampled from the energy fluence and a 3-D dose calculation in a phantom is started based on a Monte Carlo dose engine. The reconstruction model is compared to film and ionization chamber measurements for various field sizes. The reconstruction algorithm is also tested for an IMRT plan using 10 MV photons delivered to a phantom and measured using films at several depths in the phantom. Depth dose curves for both 6 and 10 MV photons are reconstructed with a maximum error generally smaller than 1% at depths larger than the buildup region, and smaller than 2% for the off-axis profiles, excluding the penumbra region. The absolute dose values are reconstructed to within 1.5% for square field sizes ranging from 5 to 20 cm width. For the IMRT plan, the dose was reconstructed and compared to the dose distribution with film using the gamma evaluation, with a 3% and 3 mm criterion. 99% of the pixels inside the irradiated field had a gamma value smaller than one. The absolute dose at the isocenter agreed to within 1% with the dose measured with an ionization

  16. Control algorithms for dynamic attenuators

    SciTech Connect

    Hsieh, Scott S.; Pelc, Norbert J.

    2014-06-15

    Purpose: The authors describe algorithms to control dynamic attenuators in CT and compare their performance using simulated scans. Dynamic attenuators are prepatient beam shaping filters that modulate the distribution of x-ray fluence incident on the patient on a view-by-view basis. These attenuators can reduce dose while improving key image quality metrics such as peak or mean variance. In each view, the attenuator presents several degrees of freedom which may be individually adjusted. The total number of degrees of freedom across all views is very large, making many optimization techniques impractical. The authors develop a theory for optimally controlling these attenuators. Special attention is paid to a theoretically perfect attenuator which controls the fluence for each ray individually, but the authors also investigate and compare three other, practical attenuator designs which have been previously proposed: the piecewise-linear attenuator, the translating attenuator, and the double wedge attenuator. Methods: The authors pose and solve the optimization problems of minimizing the mean and peak variance subject to a fixed dose limit. For a perfect attenuator and mean variance minimization, this problem can be solved in simple, closed form. For other attenuator designs, the problem can be decomposed into separate problems for each view to greatly reduce the computational complexity. Peak variance minimization can be approximately solved using iterated, weighted mean variance (WMV) minimization. Also, the authors develop heuristics for the perfect and piecewise-linear attenuators which do not require a priori knowledge of the patient anatomy. The authors compare these control algorithms on different types of dynamic attenuators using simulated raw data from forward projected DICOM files of a thorax and an abdomen. Results: The translating and double wedge attenuators reduce dose by an average of 30% relative to current techniques (bowtie filter with tube current

  17. SU-D-BRB-07: Lipiodol Impact On Dose Distribution in Liver SBRT After TACE

    SciTech Connect

    Kawahara, D; Ozawa, S; Hioki, K; Suzuki, T; Lin, Y; Okumura, T; Ochi, Y; Nakashima, T; Ohno, Y; Kimura, T; Murakami, Y; Nagata, Y

    2015-06-15

    Purpose: Stereotactic body radiotherapy (SBRT) combining transarterial chemoembolization (TACE) with Lipiodol is expected to improve local control. This study aims to evaluate the impact of Lipiodol on dose distribution by comparing the dosimetric performance of the Acuros XB (AXB) algorithm, anisotropic analytical algorithm (AAA), and Monte Carlo (MC) method using a virtual heterogeneous phantom and a treatment plan for liver SBRT after TACE. Methods: The dose distributions calculated using the AAA and the AXB algorithm, both in Eclipse (ver. 11; Varian Medical Systems, Palo Alto, CA), and EGSnrc-MC were compared. First, the inhomogeneity correction accuracy of the AXB algorithm and AAA was evaluated by comparing the percent depth dose (PDD) obtained from the algorithms with that from the MC calculations using a virtual inhomogeneity phantom, which included water and Lipiodol. Second, the dose distribution of a liver SBRT patient treatment plan was compared between the calculation algorithms. Results: In the virtual phantom, compared with the MC calculations, AAA underestimated the doses just before and in the Lipiodol region by 5.1% and 9.5%, respectively, and overestimated the doses behind the region by 6.0%. Furthermore, compared with the MC calculations, the AXB algorithm underestimated the doses just before and in the Lipiodol region by 4.5% and 10.5%, respectively, and overestimated the doses behind the region by 4.2%. In the SBRT plan, the AAA and AXB algorithm underestimated the maximum doses in the Lipiodol region by 9.0% in comparison with the MC calculations. In clinical cases, the dose enhancement in the Lipiodol region can yield an approximately 10% increase in tumor dose without increasing the dose to normal tissue. Conclusion: The MC method demonstrated a larger increase in the dose in the Lipiodol region than the AAA and AXB algorithm. Notably, dose enhancement was observed in the tumor area; this may lead to a clinical benefit.

  18. Intra-voxel heterogeneity influences the dose prescription for dose-painting with radiotherapy: a modelling study

    NASA Astrophysics Data System (ADS)

    F Petit, Steven; Dekker, André L. A. J.; Seigneuric, Renaud; Murrer, Lars; van Riel, Natal A. W.; Nordsmark, Marianne; Overgaard, Jens; Lambin, Philippe; Wouters, Bradly G.

    2009-04-01

    The purpose of this study was to increase the potential of dose redistribution by incorporating estimates of oxygen heterogeneity within imaging voxels for optimal dose determination. Cellular oxygen tension (pO2) distributions were estimated for imaging-size-based voxels by solving oxygen diffusion-consumption equations around capillaries placed at random locations. The linear-quadratic model was used to determine cell survival in the voxels as a function of pO2 and dose. The dose distribution across the tumour was optimized to yield minimal survival after 30 × 2 Gy fractions by redistributing the dose based on differences in oxygen levels. Eppendorf data of a series of 69 tumours were used as a surrogate of what might be expected from oxygen imaging datasets. Dose optimizations were performed both taking into account cellular heterogeneity in oxygenation within voxels and assuming a homogeneous cellular distribution of oxygen. Our simulations show that dose redistribution based on derived cellular oxygen distributions within voxels results in dose distributions that require less total dose to obtain the same degree of cell kill as dose distributions that were optimized with a model that considered voxels as homogeneous with respect to oxygen. Moderately hypoxic tumours are expected to gain most from dose redistribution. Incorporating cellular-based distributions of radiosensitivity into dose-planning algorithms theoretically improves the potential gains from dose redistribution algorithms.
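
    For concreteness, one common way to couple a voxel's cellular pO2 distribution to the linear-quadratic model is to scale the dose by an oxygen modification factor of the Alper-Howard-Flanders form and average survival over the cells in the voxel. The sketch below only illustrates that construction; the parameter values (m, K, alpha, beta) are generic textbook-style assumptions, not those used in the study.

```python
import numpy as np

def oxygen_modification(p_o2, m=3.0, K=3.0):
    """Relative radiosensitivity as a function of pO2 (mmHg): rises from
    1/m under anoxia to 1 when fully oxygenated (Alper-Howard-Flanders form)."""
    return (m * p_o2 + K) / (m * (p_o2 + K))

def voxel_survival(dose_per_fx, n_fx, p_o2_cells, alpha=0.3, beta=0.03):
    """Voxel surviving fraction after n_fx fractions, averaging LQ survival
    over the cellular pO2 values inside the voxel (the intra-voxel
    heterogeneity discussed above)."""
    d_eff = dose_per_fx * oxygen_modification(np.asarray(p_o2_cells, dtype=float))
    sf_per_fx = np.exp(-(alpha * d_eff + beta * d_eff**2))
    return float(np.mean(sf_per_fx ** n_fx))
```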

  19. Fluence-convolution broad-beam (FCBB) dose calculation.

    PubMed

    Lu, Weiguo; Chen, Mingli

    2010-12-01

    IMRT optimization requires a fast yet relatively accurate algorithm to calculate the iteration dose with small memory demand. In this paper, we present a dose calculation algorithm that approaches these goals. By decomposing the infinitesimal pencil beam (IPB) kernel into the central axis (CAX) component and lateral spread function (LSF) and taking the beam's eye view (BEV), we established a non-voxel and non-beamlet-based dose calculation formula. Both LSF and CAX are determined by a commissioning procedure using the collapsed-cone convolution/superposition (CCCS) method as the standard dose engine. The proposed dose calculation involves a 2D convolution of a fluence map with LSF followed by ray tracing based on the CAX lookup table with radiological distance and divergence correction, resulting in O(N^3) complexity in both space and time. This simple algorithm is orders of magnitude faster than the CCCS method. Without pre-calculation of beamlets, its implementation is also orders of magnitude smaller than the conventional voxel-based beamlet-superposition (VBS) approach. We compared the presented algorithm with the CCCS method using simulated and clinical cases. The agreement was generally within 3% for a homogeneous phantom and 5% for heterogeneous and clinical cases. Combined with the 'adaptive full dose correction', the algorithm is well suited to calculating the iteration dose during IMRT optimization. PMID:21081826
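
    The two-step structure of the FCBB calculation, a 2D convolution in the beam's eye view followed by a central-axis weighting along each ray, can be sketched as follows. This is a schematic under simplifying assumptions (a single depth plane, a callable CAX lookup, illustrative variable names), not the commissioned algorithm itself.

```python
import numpy as np
from scipy.signal import fftconvolve

def fcbb_dose_plane(fluence_map, lsf_kernel, cax_lookup, rad_depth, geo_dist, sad=100.0):
    """Dose on one BEV depth plane: (1) blur the fluence map with the lateral
    spread function (LSF); (2) weight each ray by the central-axis (CAX)
    lookup at its radiological depth and an inverse-square divergence term.
    rad_depth and geo_dist are arrays matching fluence_map in shape."""
    blurred = fftconvolve(fluence_map, lsf_kernel, mode="same")  # step 1: 2D convolution
    divergence = (sad / geo_dist) ** 2                           # divergence correction
    return blurred * cax_lookup(rad_depth) * divergence          # step 2: ray-trace weighting
```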

  20. SCCT guidelines on radiation dose and dose-optimization strategies in cardiovascular CT

    PubMed Central

    Halliburton, Sandra S.; Abbara, Suhny; Chen, Marcus Y.; Gentry, Ralph; Mahesh, Mahadevappa; Raff, Gilbert L.; Shaw, Leslee J.; Hausleiter, Jörg

    2012-01-01

    Over the last few years, computed tomography (CT) has developed into a standard clinical test for a variety of cardiovascular conditions. The emergence of cardiovascular CT during a period of dramatic increase in radiation exposure to the population from medical procedures and heightened concern about the subsequent potential cancer risk has led to intense scrutiny of the radiation burden of this new technique. This has hastened the development and implementation of dose reduction tools and prompted closer monitoring of patient dose. In an effort to aid the cardiovascular CT community in incorporating patient-centered radiation dose optimization and monitoring strategies into standard practice, the Society of Cardiovascular Computed Tomography has produced a guideline document to review available data and provide recommendations regarding interpretation of radiation dose indices and predictors of risk, appropriate use of scanner acquisition modes and settings, development of algorithms for dose optimization, and establishment of procedures for dose monitoring. PMID:21723512

  1. Dose sculpting with generalized equivalent uniform dose

    SciTech Connect

    Wu Qiuwen; Djajaputra, David; Liu, Helen H.; Dong Lei; Mohan, Radhe; Wu, Yan

    2005-05-01

    With intensity-modulated radiotherapy (IMRT), a variety of user-defined dose distributions can be produced using inverse planning. The generalized equivalent uniform dose (gEUD) has been used in IMRT optimization as an alternative objective function to the conventional dose-volume-based criteria. The purpose of this study was to investigate the effectiveness of gEUD optimization to fine-tune the dose distributions of IMRT plans. We analyzed the effect of gEUD-based optimization parameters on plan quality. The objective was to determine whether dose distribution to selected structures could be improved using gEUD optimization without adversely altering the doses delivered to other structures, as in sculpting. We hypothesized that by carefully defining gEUD parameters (EUD_0 and n) based on the current dose distributions, the optimization system could be instructed to search for alternative solutions in the neighborhood, and we could maintain the dose distributions for structures that were already satisfactory and improve the dose for structures that needed enhancement. We started with an already acceptable IMRT plan optimized with any objective function. The dose distribution was analyzed first. For structures whose dose should not be changed, a higher value of n was used and EUD_0 was set slightly higher/lower than the EUD value at the current dose distribution for critical structures/targets. For structures that needed improvement in dose, a higher to medium value of n was used, and EUD_0 was set to the EUD value or slightly lower/higher for the critical structure/target at the current dose distribution. We evaluated this method in one clinical case each of head-and-neck, lung, and prostate cancer. Dose volume histograms, isodose distributions, and relevant tolerance doses for critical structures were used for the assessment. We found that by adjusting gEUD optimization parameters, the dose distribution could be improved with only a few iterations. A larger value of n
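
    The gEUD itself reduces to a power mean of the voxel doses. A minimal sketch follows, using the common convention a = 1/n; that parameterization and the example numbers are assumptions for illustration, not a statement of the authors' implementation.

```python
import numpy as np

def gEUD(voxel_doses, a):
    """Generalized equivalent uniform dose: gEUD = (mean(d_i ** a)) ** (1/a).
    a = 1 gives the mean dose (parallel organs); large a approaches the
    maximum dose (serial organs); negative a emphasizes cold spots in targets."""
    d = np.asarray(voxel_doses, dtype=float)
    return float(np.mean(d ** a) ** (1.0 / a))

# e.g. a structure with voxel doses [60, 62, 58, 20] Gy:
# gEUD(..., a=1) = 50.0, while gEUD(..., a=10) is pulled toward the hotter voxels.
```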

  2. Absorbed Dose and Dose Equivalent Calculations for Modeling Effective Dose

    NASA Technical Reports Server (NTRS)

    Welton, Andrew; Lee, Kerry

    2010-01-01

    While in orbit, astronauts are exposed to a much higher dose of ionizing radiation than when on the ground. It is important to model, pre-flight, how spacecraft shielding designs reduce the radiation effective dose and to determine whether or not a danger to humans is presented. However, in order to calculate effective dose, dose equivalent calculations are needed. Dose equivalent takes into account an absorbed dose of radiation and the biological effectiveness of ionizing radiation. This is important in preventing long-term, stochastic radiation effects in humans spending time in space. Monte Carlo simulations run with the particle transport code FLUKA give absorbed and equivalent dose data for relevant shielding. The shielding geometry used in the dose calculations is a layered slab design, consisting of aluminum, polyethylene, and water. Water is used to simulate the soft tissues that compose the human body. The results obtained will provide information on how the shielding performs with many thicknesses of each material in the slab. This allows the results to be directly applicable to modern spacecraft shielding geometries.
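
    The bookkeeping connecting the three quantities can be summarized in a few lines. This is the standard ICRP construction; the weighting factors in the usage comment are placeholders to be taken from ICRP tables, not values from this work.

```python
def equivalent_dose(absorbed_dose_by_radiation, w_R):
    """Equivalent dose to one tissue, H_T = sum_R w_R * D_(T,R), in Sv,
    from absorbed doses (Gy) per radiation type and radiation weighting
    factors w_R."""
    return sum(w_R[r] * d for r, d in absorbed_dose_by_radiation.items())

def effective_dose(equivalent_dose_by_tissue, w_T):
    """Effective dose, E = sum_T w_T * H_T, with tissue weighting factors
    w_T summing to 1 over all tissues."""
    return sum(w_T[t] * h for t, h in equivalent_dose_by_tissue.items())

# Usage (illustrative numbers only):
# H_lung = equivalent_dose({"proton": 0.002, "neutron": 0.0005},
#                          {"proton": 2.0, "neutron": 10.0})
# E = effective_dose({"lung": H_lung, ...}, {"lung": 0.12, ...})
```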

  3. Assessment of effective dose and dose to the lens of the eye for the interventional cardiologist.

    PubMed

    Lie, Øydis Østbye; Paulsen, Gudrun Uthaug; Wøhni, Tor

    2008-01-01

    This study investigates the relationship between personal dosemeter (PD) reading, effective dose and dose to the lens of the eye for interventional cardiologists in Norway. Doses were recorded with thermoluminescence dosemeters (TLD-100) for 14 cardiologists, and the effective doses were estimated using the Niklason algorithm. The procedures performed were coronary angiography and percutaneous coronary intervention, and all the hospitals (eight) in Norway, which are performing these procedures, were included in the study. Effective dose per unit dose-area product varied by a factor of 5, and effective dose relative to PD reading varied between 4 and 39%. Eye lens doses ranged from 39 to 138% of the dosemeter reading. On the basis of an estimated annual workload of 900 procedures, the annual effective doses ranged from 1 to 11 mSv. The estimated annual doses to the unprotected eye ranged from 9 to 210 mSv. According to the ICRP dose limits, the results indicate that the eye could be the limiting organ. PMID:19056809

  4. Benchmark Dose Modeling

    EPA Science Inventory

    Finite doses are employed in experimental toxicology studies. Under the traditional methodology, the point of departure (POD) value for low dose extrapolation is identified as one of these doses. Dose spacing necessarily precludes a more accurate description of the POD value. ...

  5. A new reconstruction algorithm for Radon data

    NASA Astrophysics Data System (ADS)

    Xu, Y.; Tischenko, O.; Hoeschen, C.

    2006-03-01

    A new reconstruction algorithm for Radon data is introduced. We call the new algorithm OPED as it is based on Orthogonal Polynomial Expansion on the Disk. OPED is fundamentally different from the filtered back projection (FBP) method. It allows one to use fan beam geometry directly without any additional procedures such as interpolation or rebinning. It reconstructs high degree polynomials exactly and works for smooth functions without the assumption that functions are band-limited. Our initial tests indicate that the algorithm is stable, provides high resolution images, and has a small global error. Working with the geometry specified by the algorithm and a new mask, OPED could also lead to a reconstruction method that works with reduced x-ray dose (see the paper by Tischenko et al in these proceedings).

  6. Reasoning about systolic algorithms

    SciTech Connect

    Purushothaman, S.

    1986-01-01

    Systolic algorithms are a class of parallel algorithms, with small grain concurrency, well suited for implementation in VLSI. They are intended to be implemented as high-performance, computation-bound back-end processors and are characterized by a tesselating interconnection of identical processing elements. This dissertation investigates the problem of providing correctness of systolic algorithms. The following are reported in this dissertation: (1) a methodology for verifying correctness of systolic algorithms based on solving the representation of an algorithm as recurrence equations. The methodology is demonstrated by proving the correctness of a systolic architecture for optimal parenthesization. (2) The implementation of mechanical proofs of correctness of two systolic algorithms, a convolution algorithm and an optimal parenthesization algorithm, using the Boyer-Moore theorem prover. (3) An induction principle for proving correctness of systolic arrays which are modular. Two attendant inference rules, weak equivalence and shift transformation, which capture equivalent behavior of systolic arrays, are also presented.

  7. Algorithm-development activities

    NASA Technical Reports Server (NTRS)

    Carder, Kendall L.

    1994-01-01

    Algorithm-development activities at USF continue. The algorithm for determining chlorophyll alpha concentration (Chl alpha) and the gelbstoff absorption coefficient for SeaWiFS and MODIS-N radiance data is our current priority.

  8. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order, high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.

  9. Proton dose calculation based on in-air fluence measurements.

    PubMed

    Schaffner, Barbara

    2008-03-21

    Proton dose calculation algorithms--as well as photon and electron algorithms--are usually based on configuration measurements taken in a water phantom. The exceptions to this are proton dose calculation algorithms for modulated scanning beams. There, it is usual to measure the spot profiles in air. We use the concept of in-air configuration measurements also for scattering and uniform scanning (wobbling) proton delivery techniques. The dose calculation includes a separate step for the calculation of the in-air fluence distribution per energy layer. The in-air fluence calculation is specific to the technique and-to a lesser extent-design of the treatment machine. The actual dose calculation uses the in-air fluence as input and is generic for all proton machine designs and techniques. PMID:18367787

  10. Early dose assessment following severe radiation accidents

    SciTech Connect

    Goans, R.E.; Holloway, E.C.; Berger, M.E.; Ricks, R.C.

    1997-04-01

    Early treatment of victims of high-level acute whole-body x-ray or gamma exposure has been shown to improve their likelihood of survival. However, in such cases, both the magnitude of the exposure and the dosimetry profile(s) of the victim(s) are often not known in detail for days to weeks. A simple dose-prediction algorithm based on lymphocyte kinetics, as documented in prior radiation accidents, is presented here. This algorithm provides an estimate of dose within the first 8 h following an acute whole-body exposure. Early lymphocyte depletion kinetics after a severe radiation accident follow a single exponential, L(t) = L_0 e^(-k(D)t), where k(D) is a rate constant, dependent primarily on the average dose, D. Within the first 8 h post-accident, k(D) may be calculated utilizing serial lymphocyte counts. Data from the REAC/TS Radiation Accident Registry were used to develop a dose-prediction algorithm from 43 gamma exposure cases where both lymphocyte kinetics and dose reconstruction were felt to be reasonably reliable. The inverse relationship D(k) may be modeled by a simple two-parameter curve of the form D = a/(1 + b/k) in the range 0 ≤ D ≤ 15 Gy, with fitting parameters (mean ± SD): a = 13.6 ± 1.7 Gy, and b = 1.0 ± 0.20 d^-1. Dose estimated in this manner is intended to serve only as a first approximation to guide initial medical management. 31 refs., 4 figs., 2 tabs.
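
    Using the registry fit quoted above, the first-approximation dose follows from a log-linear fit to the serial counts. A minimal sketch is shown below; the function and variable names and the worked numbers in the usage comment are illustrative, while a and b are the fitted parameters quoted in the abstract.

```python
import numpy as np

def dose_from_lymphocyte_counts(times_days, counts, a=13.6, b=1.0):
    """First-approximation whole-body dose from early lymphocyte depletion:
    fit L(t) = L_0 * exp(-k * t) to serial counts taken within the first
    hours post-accident, then D = a / (1 + b/k), with a (Gy) and b (1/d)
    the registry fitting parameters quoted above."""
    t = np.asarray(times_days, dtype=float)
    slope, _ = np.polyfit(t, np.log(np.asarray(counts, dtype=float)), 1)
    k = -slope                          # depletion rate constant (1/d)
    if k <= 0:
        return 0.0                      # no measurable depletion yet
    return a / (1.0 + b / k)

# e.g. counts of 2500, 1800 and 1300 cells/uL at 0, 0.2 and 0.4 d post-exposure
# give k of roughly 1.6 per day and a dose estimate of roughly 8 Gy.
```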

  11. High-dose-rate prostate brachytherapy inverse planning on dose-volume criteria by simulated annealing

    NASA Astrophysics Data System (ADS)

    Deist, T. M.; Gorissen, B. L.

    2016-02-01

    High-dose-rate brachytherapy is a tumor treatment method where a highly radioactive source is brought in close proximity to the tumor. In this paper we develop a simulated annealing algorithm to optimize the dwell times at preselected dwell positions to maximize tumor coverage under dose-volume constraints on the organs at risk. Compared to existing algorithms, our algorithm has advantages in terms of speed and objective value and does not require an expensive general purpose solver. Its success mainly depends on exploiting the efficiency of matrix multiplication and a careful selection of the neighboring states. In this paper we outline its details and make an in-depth comparison with existing methods using real patient data.
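
    A bare-bones version of such a simulated annealing loop, with the dose at calculation points obtained by matrix multiplication as the abstract emphasizes, might look as follows. The objective, neighborhood move and cooling schedule here are illustrative simplifications, not the authors' formulation.

```python
import numpy as np

def anneal_dwell_times(A_tumor, A_oar, presc, oar_limit,
                       n_iter=20000, T0=1.0, cooling=0.9995, seed=0):
    """Optimize dwell times t >= 0 so that tumor points (dose A_tumor @ t)
    are covered by the prescription while OAR points (A_oar @ t) stay
    below their limit, via simulated annealing."""
    rng = np.random.default_rng(seed)
    n = A_tumor.shape[1]
    t = np.ones(n)                                     # initial dwell times

    def score(x):
        cov = np.mean(A_tumor @ x >= presc)            # tumor coverage fraction
        pen = np.mean(np.maximum(A_oar @ x - oar_limit, 0.0))
        return cov - pen                               # higher is better

    cur = best = score(t)
    best_t, T = t.copy(), T0
    for _ in range(n_iter):
        cand = t.copy()
        i = rng.integers(n)
        cand[i] = max(0.0, cand[i] + rng.normal(scale=0.5))   # perturb one dwell time
        s = score(cand)
        if s > cur or rng.random() < np.exp((s - cur) / T):   # Metropolis acceptance
            t, cur = cand, s
            if s > best:
                best, best_t = s, cand.copy()
        T *= cooling
    return best_t
```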

  12. High-dose-rate prostate brachytherapy inverse planning on dose-volume criteria by simulated annealing.

    PubMed

    Deist, T M; Gorissen, B L

    2016-02-01

    High-dose-rate brachytherapy is a tumor treatment method where a highly radioactive source is brought in close proximity to the tumor. In this paper we develop a simulated annealing algorithm to optimize the dwell times at preselected dwell positions to maximize tumor coverage under dose-volume constraints on the organs at risk. Compared to existing algorithms, our algorithm has advantages in terms of speed and objective value and does not require an expensive general purpose solver. Its success mainly depends on exploiting the efficiency of matrix multiplication and a careful selection of the neighboring states. In this paper we outline its details and make an in-depth comparison with existing methods using real patient data. PMID:26760757

  13. An expanded pharmacogenomics warfarin dosing table with utility in generalised dosing guidance.

    PubMed

    Shahabi, Payman; Scheinfeldt, Laura B; Lynch, Daniel E; Schmidlen, Tara J; Perreault, Sylvie; Keller, Margaret A; Kasper, Rachel; Wawak, Lisa; Jarvis, Joseph P; Gerry, Norman P; Gordon, Erynn S; Christman, Michael F; Dubé, Marie-Pierre; Gharani, Neda

    2016-08-01

    Pharmacogenomics (PGx) guided warfarin dosing, using a comprehensive dosing algorithm, is expected to improve dose optimisation and lower the risk of adverse drug reactions. As a complementary tool, a simple genotype-dosing table, such as in the US Food and Drug Administration (FDA) Coumadin drug label, may be utilised for general risk assessment of likely over- or under-anticoagulation on a standard dose of warfarin. This tool may be used as part of the clinical decision support for the interpretation of genetic data, serving as a first step in the anticoagulation therapy decision making process. Here we used a publicly available warfarin dosing calculator (www.warfarindosing.org) to create an expanded gene-based warfarin dosing table, the CPMC-WD table, which includes nine genetic variants in CYP2C9, VKORC1, and CYP4F2. Using two datasets, a European American cohort (EUA, n=73) and the Quebec Warfarin Cohort (QWC, n=769), we show that the CPMC-WD table more accurately predicts therapeutic dose than the FDA table (51 % vs 33 %, respectively, in the EUA, McNemar's two-sided p=0.02; 52 % vs 37 % in the QWC, p<1×10^-6). It also outperforms both the standard of care 5 mg/day dosing (51 % vs 34 % in the EUA, p=0.04; 52 % vs 31 % in the QWC, p<1×10^-6) as well as a clinical-only algorithm (51 % vs 38 % in the EUA, trend p=0.11; 52 % vs 45 % in the QWC, p=0.003). This table offers a valuable update to the PGx dosing guideline in the drug label. PMID:27121899

  14. Comparison of computed tomography dose reporting software.

    PubMed

    Abdullah, A; Sun, Z; Pongnapang, N; Ng, K-H

    2012-08-01

    Computed tomography (CT) dose reporting software facilitates the estimation of doses to patients undergoing CT examinations. In this study, comparison of three software packages, i.e. CT-Expo (version 1.5, Medizinische Hochschule, Hannover, Germany), ImPACT CT Patients Dosimetry Calculator (version 0.99×, Imaging Performance Assessment on Computed Tomography, www.impactscan.org) and WinDose (version 2.1a, Wellhofer Dosimetry, Schwarzenbruck, Germany), has been made in terms of their calculation algorithm and the results of calculated doses. Estimations were performed for head, chest, abdominal and pelvic examinations based on the protocols recommended by European guidelines using single-slice CT (SSCT) (Siemens Somatom Plus 4, Erlangen, Germany) and multi-slice CT (MSCT) (Siemens Sensation 16, Erlangen, Germany) for software-based female and male phantoms. The results showed that there are some differences in final dose reporting provided by these software packages. There are deviations of effective doses produced by these software packages. Percentages of coefficient of variance range from 3.3 to 23.4 % in SSCT and from 10.6 to 43.8 % in MSCT. It is important that researchers state the name of the software that is used to estimate the various CT dose quantities. Users must also understand the equivalent terminologies between the information obtained from the CT console and the software packages in order to use the software correctly. PMID:22155753

  15. Semioptimal practicable algorithmic cooling

    SciTech Connect

    Elias, Yuval; Mor, Tal; Weinstein, Yossi

    2011-04-15

    Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon's entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.

  16. Use of effective dose.

    PubMed

    Harrison, J D; Balonov, M; Martin, C J; Ortiz Lopez, P; Menzel, H-G; Simmonds, J R; Smith-Bindman, R; Wakeford, R

    2016-06-01

    International Commission on Radiological Protection (ICRP) Publication 103 provided a detailed explanation of the purpose and use of effective dose and equivalent dose to individual organs and tissues. Effective dose has proven to be a valuable and robust quantity for use in the implementation of protection principles. However, questions have arisen regarding practical applications, and a Task Group has been set up to consider issues of concern. This paper focusses on two key proposals developed by the Task Group that are under consideration by ICRP: (1) confusion will be avoided if equivalent dose is no longer used as a protection quantity, but regarded as an intermediate step in the calculation of effective dose. It would be more appropriate for limits for the avoidance of deterministic effects to the hands and feet, lens of the eye, and skin, to be set in terms of the quantity, absorbed dose (Gy) rather than equivalent dose (Sv). (2) Effective dose is in widespread use in medical practice as a measure of risk, thereby going beyond its intended purpose. While doses incurred at low levels of exposure may be measured or assessed with reasonable reliability, health effects have not been demonstrated reliably at such levels but are inferred. However, bearing in mind the uncertainties associated with risk projection to low doses or low dose rates, it may be considered reasonable to use effective dose as a rough indicator of possible risk, with the additional consideration of variation in risk with age, sex and population group. PMID:26980800

  17. Reasoning about systolic algorithms

    SciTech Connect

    Purushothaman, S.; Subrahmanyam, P.A.

    1988-12-01

    The authors present a methodology for verifying correctness of systolic algorithms. The methodology is based on solving a set of Uniform Recurrence Equations obtained from a description of systolic algorithms as a set of recursive equations. They present an approach to mechanically verify correctness of systolic algorithms, using the Boyer-Moore theorem prover. A mechanical correctness proof of an example from the literature is also presented.

  18. Competing Sudakov veto algorithms

    NASA Astrophysics Data System (ADS)

    Kleiss, Ronald; Verheyen, Rob

    2016-07-01

    We present a formalism to analyze the distribution produced by a Monte Carlo algorithm. We perform these analyses on several versions of the Sudakov veto algorithm, adding a cutoff, a second variable and competition between emission channels. The formal analysis allows us to prove that multiple, seemingly different competition algorithms, including those that are currently implemented in most parton showers, lead to the same result. Finally, we test their performance in a semi-realistic setting and show that there are significantly faster alternatives to the commonly used algorithms.
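
    For reference, the basic (non-competing) veto algorithm that these variants build on can be written in a few lines. This is the textbook form under the usual assumptions (an overestimate g(t) >= f(t) of the true splitting density whose Sudakov factor can be inverted directly); it is not one of the specific competition schemes compared in the paper, and the names are illustrative.

```python
import random

def veto_algorithm(t_start, t_cut, f, g, next_trial):
    """Generate the next emission scale below t_start distributed according
    to the true density f(t) times its Sudakov factor, using only the
    overestimate g(t).  next_trial(t) samples the next trial scale from the
    overestimate's Sudakov factor.  Returns None if evolution falls below
    the cutoff (no emission)."""
    t = t_start
    while True:
        t = next_trial(t)                 # trial scale from the overestimate
        if t < t_cut:
            return None                   # evolution ended without an emission
        if random.random() < f(t) / g(t):
            return t                      # veto test passed: accept this scale
        # otherwise: veto the trial emission and continue evolving from t

# For a constant overestimate g(t) = c (evolution in ln t), the trial scale is
# next_trial = lambda t: t * random.random() ** (1.0 / c)
```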

  19. Tissue Heterogeneity in IMRT Dose Calculation for Lung Cancer

    SciTech Connect

    Pasciuti, Katia; Iaccarino, Giuseppe; Strigari, Lidia; Malatesta, Tiziana; Benassi, Marcello; Di Nallo, Anna Maria; Mirri, Alessandra; Pinzi, Valentina; Landoni, Valeria

    2011-07-01

    The aim of this study was to evaluate the differences in accuracy of dose calculation between 3 commonly used algorithms, the Pencil Beam algorithm (PB), the Anisotropic Analytical Algorithm (AAA), and the Collapsed Cone Convolution Superposition (CCCS) for intensity-modulated radiation therapy (IMRT). The 2D dose distributions obtained with the 3 algorithms were compared on each CT slice pixel by pixel, using MATLAB (The MathWorks, Natick, MA), and the agreement was assessed with the γ function. The effect of the differences on dose-volume histograms (DVHs), tumor control, and normal tissue complication probability (TCP and NTCP) was also evaluated, and its significance was quantified by using a nonparametric test. In general, PB generates regions of over-dosage both in the lung and in the tumor area. These differences are not always evident in the DVH of the lung, although the Wilcoxon test indicated significant differences in 2 of 4 patients. Disagreement in the lung region was also found when the Γ analysis was performed. The effect on TCP is less important than for NTCP because of the slope of the curve at the level of the dose of interest. The effect of dose calculation inaccuracy is patient-dependent and strongly related to beam geometry and to the localization of the tumor. When multiple intensity-modulated beams are used, the effect of the presence of the heterogeneity on dose distribution may not always be easily predictable.

  20. Tissue heterogeneity in IMRT dose calculation for lung cancer.

    PubMed

    Pasciuti, Katia; Iaccarino, Giuseppe; Strigari, Lidia; Malatesta, Tiziana; Benassi, Marcello; Di Nallo, Anna Maria; Mirri, Alessandra; Pinzi, Valentina; Landoni, Valeria

    2011-01-01

    The aim of this study was to evaluate the differences in accuracy of dose calculation between 3 commonly used algorithms, the Pencil Beam algorithm (PB), the Anisotropic Analytical Algorithm (AAA), and the Collapsed Cone Convolution Superposition (CCCS) for intensity-modulated radiation therapy (IMRT). The 2D dose distributions obtained with the 3 algorithms were compared on each CT slice pixel by pixel, using MATLAB (The MathWorks, Natick, MA), and the agreement was assessed with the γ function. The effect of the differences on dose-volume histograms (DVHs), tumor control, and normal tissue complication probability (TCP and NTCP) was also evaluated, and its significance was quantified by using a nonparametric test. In general, PB generates regions of over-dosage both in the lung and in the tumor area. These differences are not always evident in the DVH of the lung, although the Wilcoxon test indicated significant differences in 2 of 4 patients. Disagreement in the lung region was also found when the Γ analysis was performed. The effect on TCP is less important than for NTCP because of the slope of the curve at the level of the dose of interest. The effect of dose calculation inaccuracy is patient-dependent and strongly related to beam geometry and to the localization of the tumor. When multiple intensity-modulated beams are used, the effect of the presence of the heterogeneity on dose distribution may not always be easily predictable. PMID:20970989

  1. Algorithm That Synthesizes Other Algorithms for Hashing

    NASA Technical Reports Server (NTRS)

    James, Mark

    2010-01-01

    An algorithm that includes a collection of several subalgorithms has been devised as a means of synthesizing still other algorithms (which could include computer code) that utilize hashing to determine whether an element (typically, a number or other datum) is a member of a set (typically, a list of numbers). Each subalgorithm synthesizes an algorithm (e.g., a block of code) that maps a static set of key hashes to a somewhat linear monotonically increasing sequence of integers. The goal in formulating this mapping is to cause the length of the sequence thus generated to be as close as practicable to the original length of the set and thus to minimize gaps between the elements. The advantage of the approach embodied in this algorithm is that it completely avoids the traditional approach of hash-key look-ups that involve either secondary hash generation and look-up or further searching of a hash table for a desired key in the event of collisions. This algorithm guarantees that it will never be necessary to perform a search or to generate a secondary key in order to determine whether an element is a member of a set. This algorithm further guarantees that any algorithm that it synthesizes can be executed in constant time. To enforce these guarantees, the subalgorithms are formulated to employ a set of techniques, each of which works very effectively covering a certain class of hash-key values. These subalgorithms are of two types, summarized as follows: Given a list of numbers, try to find one or more solutions in which, if each number is shifted to the right by a constant number of bits and then masked with a rotating mask that isolates a set of bits, a unique number is thereby generated. In a variant of the foregoing procedure, omit the masking. Try various combinations of shifting, masking, and/or offsets until the solutions are found. From the set of solutions, select the one that provides the greatest compression for the representation and is executable in the
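
    The shift-and-mask subalgorithm described above amounts to a small brute-force search for parameters that make the mapping injective on the key set. A toy version follows; the function name, search bounds and example keys are illustrative assumptions, not the NASA implementation.

```python
def find_shift_and_mask(keys, max_shift=32, max_mask_bits=12):
    """Search for a right-shift and a mask such that (key >> shift) & mask is
    unique for every key, giving a constant-time, collision-free membership
    test over a dense table of size mask + 1."""
    for shift in range(max_shift):
        for width in range(1, max_mask_bits + 1):
            mask = (1 << width) - 1
            hashed = {(k >> shift) & mask for k in keys}
            if len(hashed) == len(keys):         # injective on this key set
                return shift, mask
    return None                                  # fall back to another subalgorithm

# Usage: shift, mask = find_shift_and_mask([12, 44, 97, 130]); membership is then
# a single shift, mask and table lookup -- no secondary hashing or searching.
```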

  2. Algorithms for optimizing CT fluence control

    NASA Astrophysics Data System (ADS)

    Hsieh, Scott S.; Pelc, Norbert J.

    2014-03-01

    The ability to customize the incident x-ray fluence in CT via beam-shaping filters or mA modulation is known to improve image quality and/or reduce radiation dose. Previous work has shown that complete control of x-ray fluence (ray-by-ray fluence modulation) would further improve dose efficiency. While complete control of fluence is not currently possible, emerging concepts such as dynamic attenuators and inverse-geometry CT allow nearly complete control to be realized. Optimally using ray-by-ray fluence modulation requires solving a very high-dimensional optimization problem. Most optimization techniques fail or only provide approximate solutions. We present efficient algorithms for minimizing mean or peak variance given a fixed dose limit. The reductions in variance can easily be translated to reduction in dose, if the original variance met image quality requirements. For mean variance, a closed form solution is derived. The peak variance problem is recast as iterated, weighted mean variance minimization, and at each iteration it is possible to bound the distance to the optimal solution. We apply our algorithms in simulations of scans of the thorax and abdomen. Peak variance reductions of 45% and 65% are demonstrated in the abdomen and thorax, respectively, compared to a bowtie filter alone. Mean variance shows smaller gains (about 15%).
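
    To illustrate the flavour of the closed-form mean-variance result, consider a deliberately simplified model in which each ray's variance contribution is proportional to 1/(N_i T_i) (N_i incident photons, T_i patient transmission) and the dose budget fixes the total incident fluence; a Lagrange-multiplier argument then gives N_i proportional to 1/sqrt(T_i). The sketch below encodes only this toy model, not the authors' dose metric or attenuator constraints.

```python
import numpy as np

def mean_variance_optimal_fluence(transmission, total_fluence):
    """Toy closed-form solution: minimize sum_i 1/(N_i * T_i) subject to
    sum_i N_i = total_fluence  =>  N_i proportional to 1 / sqrt(T_i)."""
    T = np.asarray(transmission, dtype=float)
    w = 1.0 / np.sqrt(T)
    return total_fluence * w / w.sum()

# Rays through thick anatomy (small T_i) receive more incident fluence,
# which is the qualitative behaviour a dynamic attenuator tries to realize.
```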

  3. Parallel scheduling algorithms

    SciTech Connect

    Dekel, E.; Sahni, S.

    1983-01-01

    Parallel algorithms are given for scheduling problems such as scheduling to minimize the number of tardy jobs, job sequencing with deadlines, scheduling to minimize earliness and tardiness penalties, channel assignment, and minimizing the mean finish time. The shared memory model of parallel computers is used to obtain fast algorithms. 26 references.

  4. Developmental Algorithms Have Meaning!

    ERIC Educational Resources Information Center

    Green, John

    1997-01-01

    Adapts Stanic and McKillip's ideas for the use of developmental algorithms to propose that the present emphasis on symbolic manipulation should be tempered with an emphasis on the conceptual understanding of the mathematics underlying the algorithm. Uses examples from the areas of numeric computation, algebraic manipulation, and equation solving…

  5. Early dose assessment following severe radiation accidents

    SciTech Connect

    Goans, R.E.; Holloway, E.C.

    1996-06-01

    Prompt and aggressive treatment of victims of high-level whole-body gamma exposure has been shown to improve their likelihood of survival. However, in such cases, both the magnitude of the accident and the dosimetry profile(s) of the victim(s) are often not known in detail for days to weeks. Medical intervention could therefore be delayed after a major accident because of uncertainties in the initial dose estimate. A simple dose-prediction algorithm based on lymphocyte kinetics, as documented in prior radiation accidents, is presented here. This algorithm provides an estimate of marrow dose within the first 12-18 h following an acute whole-body gamma exposure. Early lymphocyte depletion curves post-accident follow a single exponential, L(t) = L_0 e^(-k(D)t), where L_0 is the pre-accident lymphocyte count and k(D) is a rate constant, dependent on the average dose, D. Within the first 12-18 h post-accident, k(D) may be calculated utilizing serial lymphocyte counts. Data from the REAC/TS Accident Registry were used to develop a dose-prediction algorithm from 43 gamma exposure cases where both lymphocyte kinetics and dose reconstruction were felt to be reasonably reliable. The relationship D(k) is shown to follow a logistic dose-response curve of the form D = a/[1 + (k/b)^c] in the range 0 ≤ D ≤ 15 Gy. The fitting parameters (mean ± SD) are found to be a = 21.5 ± 5.8 Gy, b = 1.75 ± 0.99 d^-1, and c = -0.98 ± 0.14, respectively. The coefficient of determination r² for the fit is 0.90, with an F-value of 174.7. Dose estimated in this manner is intended to serve only as a first approximation to guide initial medical management. The treatment regimen may then be modified as needed after more exact dosimetry has become available.

  6. Verification of four-dimensional photon dose calculations.

    PubMed

    Vinogradskiy, Yevgeniy Y; Balter, Peter; Followill, David S; Alvarez, Paola E; White, R Allen; Starkschall, George

    2009-08-01

    Recent work in the area of thoracic treatment planning has been focused on trying to explicitly incorporate patient-specific organ motion in the calculation of dose. Four-dimensional (4D) dose calculation algorithms have been developed and incorporated in a research version of a commercial treatment planning system (Pinnacle3, Philips Medical Systems, Milpitas, CA). Before these 4D dose calculations can be used clinically, it is necessary to verify their accuracy with measurements. The primary purpose of this study therefore was to evaluate and validate the accuracy of a 4D dose calculation algorithm with phantom measurements. A secondary objective was to determine whether the performance of the 4D dose calculation algorithm varied between different motion patterns and treatment plans. Measurements were made using two phantoms: a rigid moving phantom and a deformable phantom. The rigid moving phantom consisted of an anthropomorphic thoracic phantom that rested on a programmable motion platform. The deformable phantom used the same anthropomorphic thoracic phantom with a deformable insert for one of the lungs. Two motion patterns were investigated for each phantom: a sinusoidal motion pattern and an irregular motion pattern extracted from a patient breathing profile. A single-beam plan, a multiple-beam plan, and an intensity-modulated radiation therapy plan were created. Doses were calculated in the treatment planning system using the 4D dose calculation algorithm. Then each plan was delivered to the phantoms and the delivered doses were measured using thermoluminescent dosimeters (TLDs) and film. The measured doses were compared to the 4D-calculated doses using a measured-to-calculated TLD ratio and a gamma analysis. Passing criteria (3% for the TLD and 5%/3 mm for the gamma metric) were applied to determine if the 4D dose calculations were accurate to within clinical standards. All the TLD measurements in both phantoms satisfied the passing criteria

  7. Neutron dose equivalent meter

    DOEpatents

    Olsher, Richard H.; Hsu, Hsiao-Hua; Casson, William H.; Vasilik, Dennis G.; Kleck, Jeffrey H.; Beverding, Anthony

    1996-01-01

    A neutron dose equivalent detector for measuring neutron dose capable of accurately responding to neutron energies according to published fluence to dose curves. The neutron dose equivalent meter has an inner sphere of polyethylene, with a middle shell overlying the inner sphere, the middle shell comprising RTV silicone (organosiloxane) loaded with boron. An outer shell overlies the middle shell and comprises polyethylene loaded with tungsten. The neutron dose equivalent meter defines a channel through the outer shell, the middle shell, and the inner sphere for accepting a neutron counter tube. The outer shell is loaded with tungsten to provide neutron generation, increasing the neutron dose equivalent meter's response sensitivity above 8 MeV.

  8. Comparison of Snow Mass Estimates from a Prototype Passive Microwave Snow Algorithm, a Revised Algorithm and a Snow Depth Climatology

    NASA Technical Reports Server (NTRS)

    Foster, J. L.; Chang, A. T. C.; Hall, D. K.

    1997-01-01

    While it is recognized that no single snow algorithm is capable of producing accurate global estimates of snow depth, for research purposes it is useful to test an algorithm's performance in different climatic areas in order to see how it responds to a variety of snow conditions. This study is one of the first to develop separate passive microwave snow algorithms for North America and Eurasia by including parameters that consider the effects of variations in forest cover and crystal size on microwave brightness temperature. A new algorithm (GSFC 1996) is compared to a prototype algorithm (Chang et al., 1987) and to a snow depth climatology (SDC), which for this study is considered to be a standard reference or baseline. It is shown that the GSFC 1996 algorithm compares much more favorably to the SDC than does the Chang et al. (1987) algorithm. For example, in North America in February there is a 15% difference between the GSFC 1996 algorithm and the SDC, but with the Chang et al. (1987) algorithm the difference is greater than 50%. In Eurasia, also in February, there is only a 1.3% difference between the GSFC 1996 algorithm and the SDC, whereas with the Chang et al. (1987) algorithm the difference is about 20%. As expected, differences tend to be less when the snow cover extent is greater, particularly for Eurasia. The GSFC 1996 algorithm performs better in North America in each month than does the Chang et al. (1987) algorithm. This is also the case in Eurasia, except in April and May when the Chang et al. (1987) algorithm is in closer accord with the SDC than is the GSFC 1996 algorithm.

  9. Radiation pneumonitis following large single dose irradiation: a re-evaluation based on absolute dose to lung

    SciTech Connect

    Van Dyk, J.; Keane, T.J.; Kan, S.; Rider, W.D.; Fryer, C.J.H.

    1981-04-01

    The acute radiation pneumonitis syndrome is a major complication for patients receiving total thoracic irradiation in a large single dose. Previous studies have evaluated the onset of radiation pneumonitis on the basis of radiation doses calculated assuming unit density tissues. In this report, the incidence of radiation pneumonitis is determined as a function of absolute dose to lung. A simple algorithm relating dose correction factor to anterior-posterior patient diameter has been derived using a CT-aided treatment planning system. This algorithm was used to determine, retrospectively, the dose to lung for a group of 303 patients who had been treated with large field irradiation techniques. Of this group, 150 patients had no previous lung disease and had virtually no additional lung irradiation prior or subsequent to their large field treatment. The actuarial incidence of radiation pneumonitis versus dose to lung was evaluated using a simplified probit analysis. The resultant best fit sigmoidal complication curve demonstrates the onset of radiation pneumonitis to occur at about 750 rad with the 5% actuarial incidence occurring at approximately 820 rad. The errors associated with the dose determination procedure as well as the actuarial incidence calculations are considered. The time of onset of radiation pneumonitis occurs between 1 and 7 months after irradiation for 90% of the patients who developed pneumonitis, with the peak incidence occurring at 2 to 3 months. No correlation was found between time of onset and the dose to lung over a dose range of 650 to 1250 rad.

  10. Automated size-specific CT dose monitoring program: Assessing variability in CT dose

    SciTech Connect

    Christianson, Olav; Li Xiang; Frush, Donald; Samei, Ehsan

    2012-11-15

    Purpose: The potential health risks associated with low levels of ionizing radiation have created a movement in the radiology community to optimize computed tomography (CT) imaging protocols to use the lowest radiation dose possible without compromising the diagnostic usefulness of the images. Despite efforts to use appropriate and consistent radiation doses, studies suggest that a great deal of variability in radiation dose exists both within and between institutions for CT imaging. In this context, the authors have developed an automated size-specific radiation dose monitoring program for CT and used this program to assess variability in size-adjusted effective dose from CT imaging. Methods: The authors' radiation dose monitoring program operates on an independent Health Insurance Portability and Accountability Act (HIPAA)-compliant dosimetry server. Digital Imaging and Communications in Medicine (DICOM) routing software is used to isolate dose report screen captures and scout images for all incoming CT studies. Effective dose conversion factors (k-factors) are determined based on the protocol, and optical character recognition is used to extract the CT dose index and dose-length product. The patient's thickness is obtained by applying an adaptive thresholding algorithm to the scout images and is used to calculate the size-adjusted effective dose (ED{sub adj}). The radiation dose monitoring program was used to collect data on 6351 CT studies from three scanner models (GE Lightspeed Pro 16, GE Lightspeed VCT, and GE Definition CT750 HD) and two institutions over a one-month period and to analyze the variability in ED{sub adj} between scanner models and across institutions. Results: No significant difference was found between computer measurements of patient thickness and observer measurements (p = 0.17), and the average difference between the two methods was less than 4%. Applying the size correction resulted in ED{sub adj} that differed by up to 44% from effective dose estimates
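
    A minimal sketch of the dose arithmetic described (Python, hypothetical names): effective dose is estimated as the k-factor times the dose-length product, then scaled by a size correction driven by the patient thickness extracted from the scout image. The exponential form and its coefficients below are assumptions for illustration only; the abstract does not give the authors' actual correction.

```python
import math

def effective_dose_msv(dlp_mgy_cm, k_factor_msv_per_mgy_cm):
    """Standard DLP-based estimate: ED = k-factor * DLP."""
    return k_factor_msv_per_mgy_cm * dlp_mgy_cm

def size_adjusted_effective_dose(dlp_mgy_cm, k_factor, thickness_cm,
                                 ref_thickness_cm=28.0, slope_per_cm=0.04):
    """Scale ED by an exponential function of the difference between the
    measured patient thickness (from the scout image) and a reference size.
    The reference thickness and slope are placeholders, not published values."""
    correction = math.exp(slope_per_cm * (ref_thickness_cm - thickness_cm))
    return effective_dose_msv(dlp_mgy_cm, k_factor) * correction
```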

  11. Dose calculation accuracies in whole breast radiotherapy treatment planning: a multi-institutional study.

    PubMed

    Hatanaka, Shogo; Miyabe, Yuki; Tohyama, Naoki; Kumazaki, Yu; Kurooka, Masahiko; Okamoto, Hiroyuki; Tachibana, Hidenobu; Kito, Satoshi; Wakita, Akihisa; Ohotomo, Yuko; Ikagawa, Hiroyuki; Ishikura, Satoshi; Nozaki, Miwako; Kagami, Yoshikazu; Hiraoka, Masahiro; Nishio, Teiji

    2015-07-01

    Our objective in this study was to evaluate the variation in the doses delivered among institutions due to dose calculation inaccuracies in whole breast radiotherapy. We have developed practical procedures for quality assurance (QA) of radiation treatment planning systems. These QA procedures are designed to be performed easily at any institution and to permit comparisons of results across institutions. The dose calculation accuracy was evaluated across seven institutions using various irradiation conditions. In some conditions, there was a >3 % difference between the calculated dose and the measured dose. The dose calculation accuracy differs among institutions because it is dependent on both the dose calculation algorithm and beam modeling. The QA procedures in this study are useful for verifying the accuracy of the dose calculation algorithm and of the beam model before clinical use for whole breast radiotherapy. PMID:25646770

  12. Fast reconstruction of low dose proton CT by sinogram interpolation

    NASA Astrophysics Data System (ADS)

    Hansen, David C.; Sangild Sørensen, Thomas; Rit, Simon

    2016-08-01

    Proton computed tomography (CT) has been demonstrated as a promising image modality in particle therapy planning. It can reduce errors in particle range calculations and consequently improve dose calculations. Obtaining a high imaging resolution has traditionally required computationally expensive iterative reconstruction techniques to account for the multiple scattering of the protons. Recently, techniques for direct reconstruction have been developed, but these require a higher imaging dose than the iterative methods. No previous work has compared the image quality of the direct and the iterative methods. In this article, we extend the methodology for direct reconstruction to be applicable for low imaging doses and compare the obtained results with three state-of-the-art iterative algorithms. We find that the direct method yields comparable resolution and image quality to the iterative methods, even at 1 mSv dose levels, while yielding a twentyfold speedup in reconstruction time over previously published iterative algorithms.

  13. Firefly algorithm with chaos

    NASA Astrophysics Data System (ADS)

    Gandomi, A. H.; Yang, X.-S.; Talatahari, S.; Alavi, A. H.

    2013-01-01

    A recently developed metaheuristic optimization algorithm, firefly algorithm (FA), mimics the social behavior of fireflies based on the flashing and attraction characteristics of fireflies. In the present study, we will introduce chaos into FA so as to increase its global search mobility for robust global optimization. Detailed studies are carried out on benchmark problems with different chaotic maps. Here, 12 different chaotic maps are utilized to tune the attractive movement of the fireflies in the algorithm. The results show that some chaotic FAs can clearly outperform the standard FA.
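
    One way chaos is commonly injected into the firefly update, sketched below in Python: the light-absorption coefficient gamma, which tunes the attractive movement, is drawn from a logistic map on each pass instead of being held fixed. The choice of chaos-tuned parameter, the map, and all parameter values here are illustrative; the paper evaluates 12 different chaotic maps.

```python
import math
import random

def logistic_map(x, mu=4.0):
    """Chaotic logistic map on (0, 1)."""
    return mu * x * (1.0 - x)

def chaotic_firefly_pass(pos, brightness, chaos_state, beta0=1.0, alpha=0.2):
    """One movement pass: each firefly moves toward every brighter firefly with
    attractiveness beta0*exp(-gamma*r^2); gamma is taken from the chaotic map."""
    chaos_state = logistic_map(chaos_state)
    gamma = chaos_state
    dim = len(pos[0])
    for i in range(len(pos)):
        for j in range(len(pos)):
            if brightness[j] > brightness[i]:  # move firefly i toward brighter j
                r2 = sum((pos[i][d] - pos[j][d]) ** 2 for d in range(dim))
                beta = beta0 * math.exp(-gamma * r2)
                for d in range(dim):
                    pos[i][d] += beta * (pos[j][d] - pos[i][d]) \
                                 + alpha * (random.random() - 0.5)
    return pos, chaos_state
```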

  14. Cyclic cooling algorithm

    SciTech Connect

    Rempp, Florian; Mahler, Guenter; Michel, Mathias

    2007-09-15

    We introduce a scheme to perform the cooling algorithm, first presented by Boykin et al. in 2002, an arbitrary number of times on the same set of qubits. We achieve this goal by adding an additional SWAP gate and a bath contact to the algorithm. This way one qubit may repeatedly be cooled without adding additional qubits to the system. By using a product Liouville space to model the bath contact we calculate the density matrix of the system after a given number of applications of the algorithm.

  15. Parallel algorithms and architectures

    SciTech Connect

    Albrecht, A.; Jung, H.; Mehlhorn, K.

    1987-01-01

    Contents of this book are the following: Preparata: Deterministic simulation of idealized parallel computers on more realistic ones; Convex hull of randomly chosen points from a polytope; Dataflow computing; Parallel in sequence; Towards the architecture of an elementary cortical processor; Parallel algorithms and static analysis of parallel programs; Parallel processing of combinatorial search; Communications; An O(n log n) cost parallel algorithm for the single function coarsest partition problem; Systolic algorithms for computing the visibility polygon and triangulation of a polygonal region; and RELACS - A recursive layout computing system. Parallel linear conflict-free subtree access.

  16. The Algorithm Selection Problem

    NASA Technical Reports Server (NTRS)

    Minton, Steve; Allen, John; Deiss, Ron (Technical Monitor)

    1994-01-01

    Work on NP-hard problems has shown that many instances of these theoretically computationally difficult problems are quite easy. The field has also shown that choosing the right algorithm for the problem can have a profound effect on the time needed to find a solution. However, to date there has been little work showing how to select the right algorithm for solving any particular problem. The paper refers to this as the algorithm selection problem. It describes some of the aspects that make this problem difficult, as well as proposes a technique for addressing it.

  17. The effect of dose calculation accuracy on inverse treatment planning

    NASA Astrophysics Data System (ADS)

    Jeraj, Robert; Keall, Paul J.; Siebers, Jeffrey V.

    2002-02-01

    The effect of dose calculation accuracy during inverse treatment planning for intensity modulated radiotherapy (IMRT) was studied in this work. Three dose calculation methods were compared: Monte Carlo, superposition and pencil beam. These algorithms were used to calculate beamlets, which were subsequently used by a simulated annealing algorithm to determine beamlet weights which comprised the optimal solution to the objective function. Three different cases (lung, prostate and head and neck) were investigated and several different objective functions were tested for their effect on inverse treatment planning. It is shown that the use of inaccurate dose calculation introduces two errors in a treatment plan, a systematic error and a convergence error. The systematic error is present because of the inaccuracy of the dose calculation algorithm. The convergence error appears because the optimal intensity distribution for inaccurate beamlets differs from the optimal solution for the accurate beamlets. While the systematic error for superposition was found to be ~1% of Dmax in the tumour and slightly larger outside, the error for the pencil beam method is typically ~5% of Dmax and is rather insensitive to the given objectives. On the other hand, the convergence error was found to be very sensitive to the objective function, is only slightly correlated to the systematic error and should be determined for each case individually. Our results suggest that because of the large systematic and convergence errors, inverse treatment planning systems based on pencil beam algorithms alone should be upgraded either to superposition or Monte Carlo based dose calculations.

  18. Dose tracking and dose auditing in a comprehensive computed tomography dose-reduction program.

    PubMed

    Duong, Phuong-Anh; Little, Brent P

    2014-08-01

    Implementation of a comprehensive computed tomography (CT) radiation dose-reduction program is a complex undertaking, requiring an assessment of baseline doses, an understanding of dose-saving techniques, and an ongoing appraisal of results. We describe the role of dose tracking in planning and executing a dose-reduction program and discuss the use of the American College of Radiology CT Dose Index Registry at our institution. We review the basics of dose-related CT scan parameters, the components of the dose report, and the dose-reduction techniques, showing how an understanding of each technique is important in effective auditing of "outlier" doses identified by dose tracking. PMID:25129210

  19. A Simple Calculator Algorithm.

    ERIC Educational Resources Information Center

    Cook, Lyle; McWilliam, James

    1983-01-01

    The problem of finding cube roots when limited to a calculator with only square root capability is discussed. An algorithm is demonstrated and explained which should always produce a good approximation within a few iterations. (MP)
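
    The article's exact procedure is not reproduced in the abstract, but one scheme with the stated property (a sketch, in Python) iterates x <- sqrt(sqrt(n*x)); its fixed point satisfies x^3 = n, so only multiplication and the square-root key are needed.

```python
import math

def cube_root_with_sqrt_only(n, iterations=25):
    """Approximate n**(1/3) for n > 0 using only multiplication and square
    roots: x_(k+1) = sqrt(sqrt(n * x_k)). The error in the exponent of the
    iterate shrinks by a factor of 4 per step, so convergence is rapid."""
    x = 1.0
    for _ in range(iterations):
        x = math.sqrt(math.sqrt(n * x))
    return x

# cube_root_with_sqrt_only(27.0) -> approximately 3.0
```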

  20. Cloud model bat algorithm.

    PubMed

    Zhou, Yongquan; Xie, Jian; Li, Liangliang; Ma, Mingzhi

    2014-01-01

    Bat algorithm (BA) is a novel stochastic global optimization algorithm. Cloud model is an effective tool in transforming between qualitative concepts and their quantitative representation. Based on the bat echolocation mechanism and the excellent characteristics of the cloud model for representing uncertain knowledge, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling the echolocation model based on the living and preying characteristics of bats, utilizing the transformation theory of the cloud model to depict the qualitative concept: "bats approach their prey." Furthermore, a Lévy flight mode and a population information communication mechanism for the bats are introduced to balance exploration and exploitation. The simulation results show that the cloud model bat algorithm performs well on function optimization. PMID:24967425

  1. Line Thinning Algorithm

    NASA Astrophysics Data System (ADS)

    Feigin, G.; Ben-Yosef, N.

    1983-10-01

    A thinning algorithm, of the banana-peel type, is presented. In each iteration pixels are attacked from all directions (there are no sub-iterations), and the deletion criteria depend on the 24 nearest neighbours.

  2. Diagnostic Algorithm Benchmarking

    NASA Technical Reports Server (NTRS)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  3. Algorithmically specialized parallel computers

    SciTech Connect

    Snyder, L.; Jamieson, L.H.; Gannon, D.B.; Siegel, H.J.

    1985-01-01

    This book is based on a workshop which dealt with array processors. Topics considered include algorithmic specialization using VLSI, innovative architectures, signal processing, speech recognition, image processing, specialized architectures for numerical computations, and general-purpose computers.

  4. OpenEIS Algorithms

    2013-07-29

    The OpenEIS Algorithm package seeks to provide a low-risk path for building owners, service providers and managers to explore analytical methods for improving building control and operational efficiency. Users of this software can analyze building data, and learn how commercial implementations would provide long-term value. The code also serves as a reference implementation for developers who wish to adapt the algorithms for use in commercial tools or service offerings.

  5. The Superior Lambert Algorithm

    NASA Astrophysics Data System (ADS)

    der, G.

    2011-09-01

    Lambert algorithms are used extensively for initial orbit determination, mission planning, space debris correlation, and missile targeting, just to name a few applications. Due to the significance of the Lambert problem in Astrodynamics, Gauss, Battin, Godal, Lancaster, Gooding, Sun and many others (References 1 to 15) have provided numerous formulations leading to various analytic solutions and iterative methods. Most Lambert algorithms and their computer programs can only work within one revolution, break down or converge slowly when the transfer angle is near zero or 180 degrees, and their multi-revolution limitations are either ignored or barely addressed. Despite claims of robustness, many Lambert algorithms fail without notice, and the users seldom have a clue why. The DerAstrodynamics lambert2 algorithm, which is based on the analytic solution formulated by Sun, works for any number of revolutions and converges rapidly at any transfer angle. It provides significant capability enhancements over every other Lambert algorithm in use today. These include improved speed, accuracy, robustness, and multirevolution capabilities as well as implementation simplicity. Additionally, the lambert2 algorithm provides a powerful tool for solving the angles-only problem without artificial singularities (pointed out by Gooding in Reference 16), which involves 3 lines of sight captured by optical sensors, or systems such as the Air Force Space Surveillance System (AFSSS). The analytic solution is derived from the extended Godal’s time equation by Sun, while the iterative method of solution is that of Laguerre, modified for robustness. The Keplerian solution of a Lambert algorithm can be extended to include the non-Keplerian terms of the Vinti algorithm via a simple targeting technique (References 17 to 19). Accurate analytic non-Keplerian trajectories can be predicted for satellites and ballistic missiles, while performing at least 100 times faster in speed than most

  6. Dosimetric Algorithm to Reproduce Isodose Curves Obtained from a LINAC

    PubMed Central

    Estrada Espinosa, Julio Cesar; Martínez Ovalle, Segundo Agustín; Pereira Benavides, Cinthia Kotzian

    2014-01-01

    In this work, isodose curves are obtained with a new dosimetric algorithm that uses numerical data from percentage depth dose (PDD) curves and the maximum absorbed dose profile, calculated by Monte Carlo for an 18 MV LINAC. The software reproduces the absorbed dose percentage throughout the irradiated volume quickly and with a good approximation. To validate the results, the full geometry of an 18 MV LINAC and a water phantom were modeled. On this geometry, the simulations were run with the MCNPX code to obtain the PDD and profiles at all depths of the radiation beam. These data were then used by the code to produce the dose percentages at any point of the irradiated volume. The absorbed dose was also reproduced for any voxel size at any point of the irradiated volume, even when the voxels are as small as a pixel. The dosimetric algorithm is able to reproduce the absorbed dose induced by a radiation beam in a water phantom from the PDD and profiles, whose maximum percent value lies in the build-up region. Calculation time for the algorithm is only a few seconds, compared with the days required when the calculation is carried out by Monte Carlo. PMID:25045398
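
    A minimal sketch (Python, hypothetical interfaces) of the factorization the abstract implies: the percent dose at a point is taken as the PDD at that depth scaled by the normalized off-axis profile value. The authors' interpolation and normalization details are not specified in the abstract.

```python
def percent_dose(pdd, profile, depth_cm, off_axis_cm):
    """Percent of maximum dose at (depth, off-axis position).

    pdd(depth) returns the percentage depth dose (100 at d_max on axis);
    profile(depth, off_axis) returns the off-axis ratio normalized to 100 on
    the central axis at that depth. Both are assumed to be callables that
    interpolate the Monte Carlo tables."""
    return pdd(depth_cm) * profile(depth_cm, off_axis_cm) / 100.0
```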

  8. Know your dose: RADDOSE

    PubMed Central

    Paithankar, Karthik S.; Garman, Elspeth F.

    2010-01-01

    The program RADDOSE is widely used to compute the dose absorbed by a macromolecular crystal during an X-ray diffraction experiment. A number of factors affect the absorbed dose, including the incident X-ray flux density, the photon energy and the composition of the macromolecule and of the buffer in the crystal. An experimental dose limit for macromolecular crystallography (MX) of 30 MGy at 100 K has been reported, beyond which the biological information obtained may be compromised. Thus, for the planning of an optimized diffraction experiment the estimation of dose has become an additional tool. A number of approximations were made in the original version of RADDOSE. Recently, the code has been modified in order to take into account fluorescent X-ray escape from the crystal (version 2) and the inclusion of incoherent (Compton) scattering into the dose calculation is now reported (version 3). The Compton cross-section, although negligible at the energies currently commonly used in MX, should be considered in dose calculations for incident energies above 20 keV. Calculations using version 3 of RADDOSE reinforce previous studies that predict a reduction in the absorbed dose when data are collected at higher energies compared with data collected at 12.4 keV. Hence, a longer irradiation lifetime for the sample can be achieved at these higher energies but this is at the cost of lower diffraction intensities. The parameter ‘diffraction-dose efficiency’, which is the diffracted intensity per absorbed dose, is revisited in an attempt to investigate the benefits and pitfalls of data collection using higher and lower energy radiation, particularly for thin crystals. PMID:20382991

  9. Delivery verification and dose reconstruction in tomotherapy

    NASA Astrophysics Data System (ADS)

    Kapatoes, Jeffrey Michael

    2000-11-01

    It has long been a desire in photon-beam radiation therapy to make use of the significant fraction of the beam exiting the patient to infer how much of the beam energy was actually deposited in the patient. With a linear accelerator and corresponding exit detector mounted on the same ring gantry, tomotherapy provides a unique opportunity to accomplish this. Dose reconstruction describes the process in which the full three-dimensional dose actually deposited in a patient is computed. Dose reconstruction requires two inputs: an image of the patient at the time of treatment and the actual energy fluence delivered. Dose is reconstructed by computing the dose in the CT with the verified energy fluence using any model-based algorithm such as convolution/superposition or Monte Carlo. In tomotherapy, the CT at the time of treatment is obtained by megavoltage CT, the merits of which have been studied and proven. The actual energy fluence delivered to the patient is computed in a process called delivery verification. Methods for delivery verification and dose reconstruction in tomotherapy were investigated in this work. It is shown that delivery verification can be realized by a linear model of the tomotherapy system. However, due to the measurements required with this initial approach, clinical implementation would be difficult. Therefore, a clinically viable method for delivery verification was established, the details of which are discussed. With the verified energy fluence from delivery verification, an assessment of the accuracy and usefulness of dose reconstruction is performed. The latter two topics are presented in the context of a generalized dose comparison tool developed for intensity modulated radiation therapy. Finally, the importance of having a CT from the time of treatment for reconstructing the dose is shown. This is currently a point of contention in modern clinical radiotherapy and it is proven that using the incorrect CT for dose reconstruction can lead

  10. Analysis of the dose calculation accuracy for IMRT in lung: a 2D approach.

    PubMed

    Dvorak, Pavel; Stock, Markus; Kroupa, Bernhard; Bogner, Joachim; Georg, Dietmar

    2007-01-01

    The purpose of this study was to compare the dosimetric accuracy of IMRT plans for targets in lung with the accuracy of standard uniform-intensity conformal radiotherapy for different dose calculation algorithms. Tests were performed utilizing a special phantom manufactured from cork and polystyrene in order to quantify the uncertainty of two commercial TPS for IMRT in the lung. Ionization and film measurements were performed at various measuring points/planes. Additionally, single-beam and uniform-intensity multiple-beam tests were performed, in order to investigate deviations due to other characteristics of IMRT. Helax-TMS V6.1(A) was tested for 6, 10 and 25 MV and BrainSCAN 5.2 for 6 MV photon beams, respectively. Pencil beam (PB) with simple inhomogeneity correction and 'collapsed cone' (CC) algorithms were applied for dose calculations. However, the latter was not incorporated during optimization; hence, only post-optimization recalculation was tested. Two-dimensional dose distributions were evaluated applying the gamma index concept. Conformal plans showed the same accuracy as IMRT plans. Ionization chamber measurements detected deviations of up to 5% when a PB algorithm was used for IMRT dose calculations. Significant improvement (deviations approximately 2%) was observed when IMRT plans were recalculated with the CC algorithm, especially for the highest nominal energy. All gamma evaluations confirmed substantial improvement with the CC algorithm in 2D. While PB dose distributions showed most discrepancies in lower (<50%) and high (>90%) dose regions, the CC dose distributions deviated mainly in the high dose gradient (20-80%) region. The advantages of IMRT (conformity, intra-target dose control) should be counterbalanced with possible calculation inaccuracies for targets in the lung. As long as no superior dose calculation algorithms are involved in the iterative optimization process, it should be used with great care. When only PB algorithm with simple

  11. Optimization of Dose Distribution for the System of Linear Accelerator-Based Stereotactic Radiosurgery.

    NASA Astrophysics Data System (ADS)

    Suh, Tae-Suk

    The work suggested in this paper addresses a method for obtaining an optimal dose distribution for stereotactic radiosurgery. Since stereotactic radiosurgery utilizes multiple noncoplanar arcs and a three-dimensional dose evaluation technique, many beam parameters and complex optimization criteria are included in the dose optimization. Consequently, a lengthy computation time is required to optimize even the simplest case by a trial and error method. The basic approach presented here is to use both an analytical and an experimental optimization to minimize the dose to critical organs while maintaining a dose shaped to the target. The experimental approach is based on shaping the target volumes using multiple isocenters from dose experience, or on field shaping using a beam's eye view technique. The analytical approach is to adapt computer-aided design optimization to find optimum parameters automatically. Three-dimensional approximate dose models are developed to simulate the exact dose model using a spherical or cylindrical coordinate system. Optimum parameters are found much faster with the use of computer-aided design optimization techniques. The implementation of computer-aided design algorithms with the approximate dose model and the application of the algorithms to several cases are discussed. It is shown that the approximate dose model gives dose distributions similar to those of the exact dose model, which makes the approximate dose model an attractive alternative to the exact dose model, and much more efficient in terms of computer-aided design and visual optimization.

  12. Gamma Knife radiosurgery with CT image-based dose calculation.

    PubMed

    Xu, Andy Yuanguang; Bhatnagar, Jagdish; Bednarz, Greg; Niranjan, Ajay; Kondziolka, Douglas; Flickinger, John; Lunsford, L Dade; Huq, M Saiful

    2015-01-01

    The Leksell GammaPlan software version 10 introduces a CT image-based segmentation tool for automatic skull definition and a convolution dose calculation algorithm for tissue inhomogeneity correction. The purpose of this work was to evaluate the impact of these new approaches on routine clinical Gamma Knife treatment planning. Sixty-five patients who underwent CT image-guided Gamma Knife radiosurgeries at the University of Pittsburgh Medical Center in recent years were retrospectively investigated. The diagnoses for these cases include trigeminal neuralgia, meningioma, acoustic neuroma, AVM, glioma, and benign and metastatic brain tumors. Dose calculations were performed for each patient with the same dose prescriptions and the same shot arrangements using three different approaches: 1) TMR 10 dose calculation with imaging skull definition; 2) convolution dose calculation with imaging skull definition; 3) TMR 10 dose calculation with conventional measurement-based skull definition. For each treatment matrix, the total treatment time, the target coverage index, the selectivity index, the gradient index, and a set of dose statistics parameters were compared between the three calculations. The dose statistics parameters investigated include the prescription isodose volume, the 12 Gy isodose volume, the minimum, maximum and mean doses on the treatment targets, and the critical structures under consideration. The difference between the convolution and the TMR 10 dose calculations for the 104 treatment matrices were found to vary with the patient anatomy, location of the treatment shots, and the tissue inhomogeneities around the treatment target. An average difference of 8.4% was observed for the total treatment times between the convolution and the TMR algorithms. The maximum differences in the treatment times, the prescription isodose volumes, the 12 Gy isodose volumes, the target coverage indices, the selectivity indices, and the gradient indices from the convolution

  13. Acetaminophen dosing for children

    MedlinePlus

    Taking acetaminophen (Tylenol) can help children with colds and fever feel better. As with all drugs, it is important to give children the correct dose. Acetaminophen is safe when taken as directed. But taking ...

  14. Calculating drug doses.

    PubMed

    2016-09-01

    Numeracy and calculation are key skills for nurses. As nurses are directly accountable for ensuring medicines are prescribed, dispensed and administered safely, they must be able to understand and calculate drug doses. PMID:27615351

  15. Dose-response model for teratological experiments involving quantal responses

    SciTech Connect

    Rai, K.; Van Ryzin, J.

    1985-03-01

    This paper introduces a dose-response model for teratological quantal response data where the probability of response for an offspring from a female at a given dose varies with the litter size. The maximum likelihood estimators for the parameters of the model are given as the solution of a nonlinear iterative algorithm. Two methods of low-dose extrapolation are presented, one based on the litter size distribution and the other a conservative method. The resulting procedures are then applied to a teratological data set from the literature.

  16. A simple analytical method for heterogeneity corrections in low dose rate prostate brachytherapy

    NASA Astrophysics Data System (ADS)

    Hueso-González, Fernando; Vijande, Javier; Ballester, Facundo; Perez-Calatayud, Jose; Siebert, Frank-André

    2015-07-01

    In low energy brachytherapy, the presence of tissue heterogeneities contributes significantly to the discrepancies observed between treatment plan and delivered dose. In this work, we present a simplified analytical dose calculation algorithm for heterogeneous tissue. We compare it with Monte Carlo computations and assess its suitability for integration in clinical treatment planning systems. The algorithm, named as RayStretch, is based on the classic equivalent path length method and TG-43 reference data. Analytical and Monte Carlo dose calculations using Penelope2008 are compared for a benchmark case: a prostate patient with calcifications. The results show a remarkable agreement between simulation and algorithm, the latter having, in addition, a high calculation speed. The proposed analytical model is compatible with clinical real-time treatment planning systems based on TG-43 consensus datasets for improving dose calculation and treatment quality in heterogeneous tissue. Moreover, the algorithm is applicable for any type of heterogeneities.
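
    A generic equivalent-path-length correction of the kind the abstract describes can be sketched as follows (Python); this is a standard formulation under stated assumptions, not necessarily the exact RayStretch implementation. Attenuation and scatter are looked up from the TG-43 water dose at the water-equivalent radius, while the inverse-square factor is restored to the geometric radius.

```python
def water_equivalent_radius(segment_lengths_cm, densities_g_cm3):
    """Radiological path length from source to point: geometric segment
    lengths along the ray scaled by density relative to water."""
    return sum(l * rho for l, rho in zip(segment_lengths_cm, densities_g_cm3))

def heterogeneity_corrected_dose(tg43_dose_rate, r_cm, r_eff_cm):
    """Evaluate the TG-43 (homogeneous water) dose rate at the water-equivalent
    radius, then rescale so the geometric inverse-square dependence uses the
    true source-to-point distance."""
    return tg43_dose_rate(r_eff_cm) * (r_eff_cm / r_cm) ** 2
```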

  17. Project resource reallocation algorithm

    NASA Technical Reports Server (NTRS)

    Myers, J. E.

    1981-01-01

    A methodology for adjusting baseline cost estimates according to project schedule changes is described. An algorithm which performs a linear expansion or contraction of the baseline project resource distribution in proportion to the project schedule expansion or contraction is presented. Input to the algorithm consists of the deck of cards (PACE input data) prepared for the baseline project schedule as well as a specification of the nature of the baseline schedule change. Output of the algorithm is a new deck of cards with all work breakdown structure block and element of cost estimates redistributed for the new project schedule. This new deck can be processed through PACE to produce a detailed cost estimate for the new schedule.

  18. Optical rate sensor algorithms

    NASA Technical Reports Server (NTRS)

    Uhde-Lacovara, Jo A.

    1989-01-01

    Optical sensors, in particular Charge Coupled Device (CCD) arrays, will be used on Space Station to track stars in order to provide inertial attitude reference. Algorithms are presented to derive attitude rate from the optical sensors. The first algorithm is a recursive differentiator. A variance reduction factor (VRF) of 0.0228 was achieved with a rise time of 10 samples. A VRF of 0.2522 gives a rise time of 4 samples. The second algorithm is based on the direct manipulation of the pixel intensity outputs of the sensor. In 1-dimensional simulations, the derived rate was within 0.07 percent of the actual rate in the presence of additive Gaussian noise with a signal to noise ratio of 60 dB.

  19. Temperature Corrected Bootstrap Algorithm

    NASA Technical Reports Server (NTRS)

    Comiso, Joey C.; Zwally, H. Jay

    1997-01-01

    A temperature corrected Bootstrap Algorithm has been developed using Nimbus-7 Scanning Multichannel Microwave Radiometer data in preparation for the upcoming AMSR instrument aboard ADEOS and EOS-PM. The procedure first calculates the effective surface emissivity using emissivities of ice and water at 6 GHz and a mixing formulation that utilizes ice concentrations derived using the current Bootstrap algorithm but using brightness temperatures from 6 GHz and 37 GHz channels. These effective emissivities are then used to calculate the surface ice temperature, which in turn is used to convert the 18 GHz and 37 GHz brightness temperatures to emissivities. Ice concentrations are then derived using the same technique as with the Bootstrap algorithm but using emissivities instead of brightness temperatures. The results show significant improvement in the area where ice temperature is expected to vary considerably such as near the continental areas in the Antarctic, where the ice temperature is colder than average, and in marginal ice zones.
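
    The mixing and conversion steps described can be sketched as follows (Python). The linear mixing by areal ice fraction and the simple ratio used to turn brightness temperature into emissivity are stated assumptions, with atmospheric contributions neglected.

```python
def effective_surface_emissivity(ice_fraction, e_ice, e_water):
    """Linear mixing of ice and open-water emissivities by areal ice fraction."""
    return ice_fraction * e_ice + (1.0 - ice_fraction) * e_water

def brightness_temp_to_emissivity(tb_kelvin, surface_temp_kelvin):
    """Convert a channel brightness temperature to emissivity given the
    physical surface temperature inferred from the 6 GHz step (atmospheric
    contributions neglected)."""
    return tb_kelvin / surface_temp_kelvin
```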

  20. Power spectral estimation algorithms

    NASA Technical Reports Server (NTRS)

    Bhatia, Manjit S.

    1989-01-01

    Algorithms to estimate the power spectrum using Maximum Entropy Methods were developed. These algorithms were coded in FORTRAN 77 and were implemented on the VAX 780. The important considerations in this analysis are: (1) resolution, i.e., how close in frequency two spectral components can be spaced and still be identified; (2) dynamic range, i.e., how small a spectral peak can be, relative to the largest, and still be observed in the spectra; and (3) variance, i.e., how closely the estimated spectrum matches the actual spectrum. The application of the algorithms based on Maximum Entropy Methods to a variety of data shows that these criteria are met quite well. Additional work in this direction would help confirm the findings. All of the software developed was turned over to the technical monitor. A copy of a typical program is included. Some of the actual data and graphs based on these data are also included.

  1. Patient-specific dose calculation methods for high-dose-rate iridium-192 brachytherapy

    NASA Astrophysics Data System (ADS)

    Poon, Emily S.

    . The scatter dose is again adjusted using our scatter correction technique. The algorithm was tested using phantoms and actual patient plans for head-and-neck, esophagus, and MammoSite breast brachytherapy. Although the method fails to correct for the changes in lateral scatter introduced by inhomogeneities, it is a major improvement over TG-43 and is sufficiently fast for clinical use.

  2. Programming parallel vision algorithms

    SciTech Connect

    Shapiro, L.G.

    1988-01-01

    Computer vision requires the processing of large volumes of data and requires parallel architectures and algorithms to be useful in real-time, industrial applications. The INSIGHT dataflow language was designed to allow encoding of vision algorithms at all levels of the computer vision paradigm. INSIGHT programs, which are relational in nature, can be translated into a graph structure that represents an architecture for solving a particular vision problem or a configuration of a reconfigurable computational network. The authors consider here INSIGHT programs that produce a parallel net architecture for solving low-, mid-, and high-level vision tasks.

  3. New Effective Multithreaded Matching Algorithms

    SciTech Connect

    Manne, Fredrik; Halappanavar, Mahantesh

    2014-05-19

    Matching is an important combinatorial problem with a number of applications in areas such as community detection, sparse linear algebra, and network alignment. Since computing optimal matchings can be very time consuming, several fast approximation algorithms, both sequential and parallel, have been suggested. Common to the algorithms giving the best solutions is that they tend to be sequential by nature, while algorithms more suitable for parallel computation give solutions of lower quality. We present a new simple 1/2-approximation algorithm for the weighted matching problem. This algorithm is both faster than any other suggested sequential 1/2-approximation algorithm on almost all inputs and also scales better than previous multithreaded algorithms. We further extend this to a general scalable multithreaded algorithm that computes matchings of weight comparable with the best sequential algorithms. The performance of the suggested algorithms is documented through extensive experiments on different multithreaded architectures.
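
    For orientation, the classic sequential 1/2-approximation that this line of work builds on can be written in a few lines (Python sketch): scan edges in non-increasing weight order and keep an edge whenever both endpoints are still free. This illustrates the approximation class only; it is not the paper's new algorithm, whose point is to avoid exactly this kind of inherently sequential scan.

```python
def greedy_half_approx_matching(edges):
    """Classic greedy 1/2-approximation for maximum-weight matching: take edges
    in non-increasing weight order whenever both endpoints are unmatched."""
    matched = set()
    matching = []
    for u, v, w in sorted(edges, key=lambda e: e[2], reverse=True):
        if u not in matched and v not in matched:
            matching.append((u, v, w))
            matched.update((u, v))
    return matching

# Example: greedy_half_approx_matching([("a", "b", 3.0), ("b", "c", 2.0), ("c", "d", 2.5)])
# keeps ("a", "b") and ("c", "d").
```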

  4. A pencil beam algorithm for helium ion beam therapy

    SciTech Connect

    Fuchs, Hermann; Stroebele, Julia; Schreiner, Thomas; Hirtl, Albert; Georg, Dietmar

    2012-11-15

    Purpose: To develop a flexible pencil beam algorithm for helium ion beam therapy. Dose distributions were calculated using the newly developed pencil beam algorithm and validated using Monte Carlo (MC) methods. Methods: The algorithm was based on the established theory of fluence weighted elemental pencil beam (PB) kernels. Using a new real-time splitting approach, a minimization routine selects the optimal shape for each sub-beam. Dose depositions along the beam path were determined using a look-up table (LUT). Data for LUT generation were derived from MC simulations in water using GATE 6.1. For materials other than water, dose depositions were calculated by the algorithm using water-equivalent depth scaling. Lateral beam spreading caused by multiple scattering has been accounted for by implementing a non-local scattering formula developed by Gottschalk. A new nuclear correction was modelled using a Voigt function and implemented by a LUT approach. Validation simulations have been performed using a phantom filled with homogeneous materials or heterogeneous slabs of up to 3 cm. The beams were incident perpendicular to the phantom's surface with initial particle energies ranging from 50 to 250 MeV/A with a total number of 10{sup 7} ions per beam. For comparison, dedicated evaluation software was developed to calculate the gamma indices for dose distributions. Results: In homogeneous phantoms, maximum range deviations between PB and MC of less than 1.1% and differences in the width of the distal energy falloff of the Bragg-Peak from 80% to 20% of less than 0.1 mm were found. Heterogeneous phantoms using layered slabs satisfied a {gamma}-index criterion of 2%/2mm of the local value except for some single voxels. For more complex phantoms using laterally arranged bone-air slabs, the {gamma}-index criterion was exceeded in some areas giving a maximum {gamma}-index of 1.75 and 4.9% of the voxels showed {gamma}-index values larger than one. The calculation precision of the

  5. Experimental validation of the Eclipse AAA algorithm.

    PubMed

    Breitman, Karen; Rathee, Satyapal; Newcomb, Chris; Murray, Brad; Robinson, Donald; Field, Colin; Warkentin, Heather; Connors, Sherry; Mackenzie, Marc; Dunscombe, Peter; Fallone, Gino

    2007-01-01

    The present study evaluates the performance of a newly released photon-beam dose calculation algorithm that is incorporated into an established treatment planning system (TPS). We compared the analytical anisotropic algorithm (AAA) factory-commissioned with "golden beam data" for Varian linear accelerators with measurements performed at two institutions using 6-MV and 15-MV beams. The TG-53 evaluation regions and criteria were used to evaluate profiles measured in a water phantom for a wide variety of clinically relevant beam geometries. The total scatter factor (TSF) for each of these geometries was also measured and compared against the results from the AAA. At one institute, TLD measurements were performed at several points in the neck and thoracic regions of a Rando phantom; at the other institution, ion chamber measurements were performed in a CIRS inhomogeneous phantom. The phantoms were both imaged using computed tomography (CT), and the dose was calculated using the AAA at corresponding detector locations. Evaluation of measured relative dose profiles revealed that 97%, 99%, 97%, and 100% of points at one institute and 96%, 88%, 89%, and 100% of points at the other institution passed TG-53 evaluation criteria in the outer beam, penumbra, inner beam, and buildup regions respectively. Poorer results in the inner beam regions at one institute are attributed to the mismatch of the measured profiles at shallow depths with the "golden beam data." For validation of monitor unit (MU) calculations, the mean difference between measured and calculated TSFs was less than 0.5%; test cases involving physical wedges had, in general, differences of more than 1%. The mean difference between point measurements performed in inhomogeneous phantoms and Eclipse was 2.1% (5.3% maximum) and all differences were within TG-53 guidelines of 7%. By intent, the methods and evaluation techniques were similar to those in a previous investigation involving another convolution

  6. An algorithm for neurite outgrowth reconstruction

    NASA Technical Reports Server (NTRS)

    Weaver, Christina M.; Pinezich, John D.; Lindquist, W. Brent; Vazquez, Marcelo E.

    2003-01-01

    We present a numerical method which provides the ability to analyze digitized microscope images of retinal explants and quantify neurite outgrowth. Few parameters are required as input and limited user interaction is necessary to process an entire experiment of images. This eliminates fatigue-related errors and user-related bias common to manual analysis. The method does not rely on stained images and handles images of variable quality. The algorithm is used to determine time- and dose-dependent in vitro neurotoxic effects of 1 GeV per nucleon iron particles in retinal explants. No neurotoxic effects are detected until 72 h after exposure; at 72 h, significant reductions of neurite outgrowth occurred at doses higher than 10 cGy.

  7. Assessment of phase based dose modulation for improved dose efficiency in cardiac CT on an anthropomorphic motion phantom

    NASA Astrophysics Data System (ADS)

    Budde, Adam; Nilsen, Roy; Nett, Brian

    2014-03-01

    State of the art automatic exposure control modulates the tube current across view angle and Z based on patient anatomy for use in axial full scan reconstructions. Cardiac CT, however, uses a fundamentally different image reconstruction that applies a temporal weighting to reduce motion artifacts. This paper describes a phase based mA modulation that goes beyond axial and ECG modulation; it uses knowledge of the temporal view weighting applied within the reconstruction algorithm to improve dose efficiency in cardiac CT scanning. Using physical phantoms and synthetic noise emulation, we measure how knowledge of sinogram temporal weighting and the prescribed cardiac phase can be used to improve dose efficiency. First, we validated that a synthetic CT noise emulation method produced realistic image noise. Next, we used the CT noise emulation method to simulate mA modulation on scans of a physical anthropomorphic phantom where a motion profile corresponding to a heart rate of 60 beats per minute was used. The CT noise emulation method matched noise to lower dose scans across the image within 1.5% relative error. Using this noise emulation method to simulate modulating the mA while keeping the total dose constant, the image variance was reduced by an average of 11.9% on a scan with 50 msec padding, demonstrating improved dose efficiency. Radiation dose reduction in cardiac CT can be achieved while maintaining the same level of image noise through phase based dose modulation that incorporates knowledge of the cardiac reconstruction algorithm.

  8. Effect of deformable registration on the dose calculated in radiation therapy planning CT scans of lung cancer patients

    SciTech Connect

    Cunliffe, Alexandra R.; Armato, Samuel G.; White, Bradley; Justusson, Julia; Contee, Clay; Malik, Renuka; Al-Hallaq, Hania A.

    2015-01-15

    Purpose: To characterize the effects of deformable image registration of serial computed tomography (CT) scans on the radiation dose calculated from a treatment planning scan. Methods: Eighteen patients who received curative doses (≥60 Gy, 2 Gy/fraction) of photon radiation therapy for lung cancer treatment were retrospectively identified. For each patient, a diagnostic-quality pretherapy (4–75 days) CT scan and a treatment planning scan with an associated dose map were collected. To establish correspondence between scan pairs, a researcher manually identified anatomically corresponding landmark point pairs between the two scans. Pretherapy scans then were coregistered with planning scans (and associated dose maps) using the demons deformable registration algorithm and two variants of the Fraunhofer MEVIS algorithm (“Fast” and “EMPIRE10”). Landmark points in each pretherapy scan were automatically mapped to the planning scan using the displacement vector field output from each of the three algorithms. The Euclidean distance between manually and automatically mapped landmark points (d{sub E}) and the absolute difference in planned dose (|ΔD|) were calculated. Using regression modeling, |ΔD| was modeled as a function of d{sub E}, dose (D), dose standard deviation (SD{sub dose}) in an eight-pixel neighborhood, and the registration algorithm used. Results: Over 1400 landmark point pairs were identified, with 58–93 (median: 84) points identified per patient. Average |ΔD| across patients was 3.5 Gy (range: 0.9–10.6 Gy). Registration accuracy was highest using the Fraunhofer MEVIS EMPIRE10 algorithm, with an average d{sub E} across patients of 5.2 mm (compared with >7 mm for the other two algorithms). Consequently, average |ΔD| was also lowest using the Fraunhofer MEVIS EMPIRE10 algorithm. |ΔD| increased significantly as a function of d{sub E} (0.42 Gy/mm), D (0.05 Gy/Gy), SD{sub dose} (1.4 Gy/Gy), and the algorithm used (≤1 Gy). Conclusions: An

  9. Utirik Atoll Dose Assessment

    SciTech Connect

    Robison, W.L.; Conrado, C.L.; Bogen, K.T

    1999-10-06

    On March 1, 1954, radioactive fallout from the nuclear test at Bikini Atoll code-named BRAVO was deposited on Utirik Atoll, which lies about 187 km (300 miles) east of Bikini Atoll. The residents of Utirik were evacuated three days after the fallout started and returned to their atoll in May 1954. In this report we provide a final dose assessment for current conditions at the atoll based on extensive data generated from samples collected in 1993 and 1994. The estimated population average maximum annual effective dose using a diet including imported foods is 0.037 mSv y{sup -1} (3.7 mrem y{sup -1}). The 95% confidence limits are within a factor of three of their population average value. The population average integrated effective dose over 30-, 50-, and 70-y is 0.84 mSv (84 mrem), 1.2 mSv (120 mrem), and 1.4 mSv (140 mrem), respectively. The 95% confidence limits on the population-average value post 1998, i.e., the 30-, 50-, and 70-y integral doses, are within a factor of two of the mean value and are independent of time, t, for t > 5 y. Cesium-137 ({sup 137}Cs) is the radionuclide that contributes most of this dose, mostly through the terrestrial food chain and secondarily from external gamma exposure. The dose from weapons-related radionuclides is very low and of no consequence to the health of the population. The annual background doses in the U. S. and Europe are 3.0 mSv (300 mrem), and 2.4 mSv (240 mrem), respectively. The annual background dose in the Marshall Islands is estimated to be 1.4 mSv (140 mrem). The total estimated combined Marshall Islands background dose plus the weapons-related dose is about 1.5 mSv y{sup -1} (150 mrem y{sup -1}) which can be directly compared to the annual background effective dose of 3.0 mSv y{sup -1} (300 mrem y{sup -1}) for the U. S. and 2.4 mSv y{sup -1} (240 mrem y{sup -1}) for Europe. Moreover, the doses listed in this report are based only on the radiological decay of {sup 137}Cs (30.1 y half-life) and other

  10. In vivo verification of radiation dose delivered to healthy tissue during radiotherapy for breast cancer

    NASA Astrophysics Data System (ADS)

    Lonski, P.; Taylor, M. L.; Hackworth, W.; Phipps, A.; Franich, R. D.; Kron, T.

    2014-03-01

    Different treatment planning system (TPS) algorithms calculate radiation dose in different ways. This work compares measurements made in vivo to the dose calculated at out-of-field locations using three different commercially available algorithms in the Eclipse treatment planning system. LiF: Mg, Cu, P thermoluminescent dosimeter (TLD) chips were placed with 1 cm build-up at six locations on the contralateral side of 5 patients undergoing radiotherapy for breast cancer. TLD readings were compared to calculations of Pencil Beam Convolution (PBC), Anisotropic Analytical Algorithm (AAA) and Acuros XB (XB). AAA predicted zero dose at points beyond 16 cm from the field edge. In the same region PBC returned an unrealistically constant result independent of distance and XB showed good agreement to measured data although consistently underestimated by ~0.1 % of the prescription dose. At points closer to the field edge XB was the superior algorithm, exhibiting agreement with TLD results to within 15 % of measured dose. Both AAA and PBC showed mixed agreement, with overall discrepancies considerably greater than XB. While XB is certainly the preferable algorithm, it should be noted that TPS algorithms in general are not designed to calculate dose at peripheral locations and calculation results in such regions should be treated with caution.

  11. Robotic Follow Algorithm

    2005-03-30

    The Robotic Follow Algorithm enables any robotic vehicle to follow a moving target while reactively choosing a route around nearby obstacles. The robotic follow behavior can be used with different camera systems and can be used with thermal or visual tracking as well as other tracking methods such as radio frequency tags.

  12. Data Structures and Algorithms.

    ERIC Educational Resources Information Center

    Wirth, Niklaus

    1984-01-01

    Built-in data structures are the registers and memory words where binary values are stored; hard-wired algorithms are the fixed rules, embodied in electronic logic circuits, by which stored data are interpreted as instructions to be executed. Various topics related to these two basic elements of every computer program are discussed. (JN)

  13. General cardinality genetic algorithms

    PubMed

    Koehler; Bhattacharyya; Vose

    1997-01-01

    A complete generalization of the Vose genetic algorithm model from the binary to higher cardinality case is provided. Boolean AND and EXCLUSIVE-OR operators are replaced by multiplication and addition over rings of integers. Walsh matrices are generalized with finite Fourier transforms for higher cardinality usage. Comparison of results to the binary case are provided. PMID:10021767

  14. The Lure of Algorithms

    ERIC Educational Resources Information Center

    Drake, Michael

    2011-01-01

    One debate that periodically arises in mathematics education is the issue of how to teach calculation more effectively. "Modern" approaches seem to initially favour mental calculation, informal methods, and the development of understanding before introducing written forms, while traditionalists tend to champion particular algorithms. The debate is…

  15. The Xmath Integration Algorithm

    ERIC Educational Resources Information Center

    Bringslid, Odd

    2009-01-01

    The projects Xmath (Bringslid and Canessa, 2002) and dMath (Bringslid, de la Villa and Rodriguez, 2007) were supported by the European Commission in the so called Minerva Action (Xmath) and The Leonardo da Vinci programme (dMath). The Xmath eBook (Bringslid, 2006) includes algorithms into a wide range of undergraduate mathematical issues embedded…

  16. Bayesian population modeling of drug dosing adherence.

    PubMed

    Fellows, Kelly; Stoneking, Colin J; Ramanathan, Murali

    2015-10-01

    Adherence is a frequent contributing factor to variations in drug concentrations and efficacy. The purpose of this work was to develop an integrated population model to describe variation in adherence, dose-timing deviations, overdosing and persistence to dosing regimens. The hybrid Markov chain-von Mises method for modeling adherence in individual subjects was extended to the population setting using a Bayesian approach. Four integrated population models for overall adherence, the two-state Markov chain transition parameters, dose-timing deviations, overdosing and persistence were formulated and critically compared. The Markov chain-Monte Carlo algorithm was used for identifying distribution parameters and for simulations. The model was challenged with medication event monitoring system data for 207 hypertension patients. The four Bayesian models demonstrated good mixing and convergence characteristics. The distributions of adherence, dose-timing deviations, overdosing and persistence were markedly non-normal and diverse. The models varied in complexity and the method used to incorporate inter-dependence with the preceding dose in the two-state Markov chain. The model that incorporated a cooperativity term for inter-dependence and a hyperbolic parameterization of the transition matrix probabilities was identified as the preferred model over the alternatives. The simulated probability densities from the model satisfactorily fit the observed probability distributions of adherence, dose-timing deviations, overdosing and persistence parameters in the sample patients. The model also adequately described the median and observed quartiles for these parameters. The Bayesian model for adherence provides a parsimonious, yet integrated, description of adherence in populations. It may find potential applications in clinical trial simulations and pharmacokinetic-pharmacodynamic modeling. PMID:26319548
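
    A stripped-down sketch (Python) of the two-state Markov chain at the core of the adherence model: the probability of taking the next dose depends on whether the previous dose was taken. The von Mises dose-timing component, overdosing, persistence, and the Bayesian population layer are omitted, and the transition probabilities shown are illustrative rather than fitted values.

```python
import random

def simulate_adherence(n_doses, p_take_after_take=0.9, p_take_after_miss=0.6,
                       start_taken=True, rng=random.random):
    """Simulate taken/missed states for n scheduled doses with a two-state
    Markov chain whose transition probabilities depend on the previous state."""
    taken = start_taken
    history = []
    for _ in range(n_doses):
        p = p_take_after_take if taken else p_take_after_miss
        taken = rng() < p
        history.append(taken)
    return history

# Overall adherence is the fraction of scheduled doses actually taken:
# sum(simulate_adherence(200)) / 200
```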

  17. Three-dimensional gamma analysis of dose distributions in individual structures for IMRT dose verification.

    PubMed

    Tomiyama, Yuuki; Araki, Fujio; Oono, Takeshi; Hioki, Kazunari

    2014-07-01

    Our purpose in this study was to implement three-dimensional (3D) gamma analysis for structures of interest such as the planning target volume (PTV) or clinical target volume (CTV), and organs at risk (OARs) for intensity-modulated radiation therapy (IMRT) dose verification. IMRT dose distributions for prostate and head and neck (HN) cancer patients were calculated with an analytical anisotropic algorithm in an Eclipse (Varian Medical Systems) treatment planning system (TPS) and by Monte Carlo (MC) simulation. The MC dose distributions were calculated with EGSnrc/BEAMnrc and DOSXYZnrc user codes under conditions identical to those for the TPS. The prescribed doses were 76 Gy/38 fractions with five-field IMRT for the prostate and 33 Gy/17 fractions with seven-field IMRT for the HN. TPS dose distributions were verified by the gamma passing rates for the whole calculated volume, PTV or CTV, and OARs by use of 3D gamma analysis with reference to MC dose distributions. The acceptance criteria for the 3D gamma analysis were 3 %/3 mm and 2 %/2 mm for dose difference and distance to agreement. The gamma passing rates in PTV and OARs for the prostate IMRT plan were close to 100 %. For the HN IMRT plan, the passing rates of 2 %/2 mm in CTV and OARs were substantially lower because inhomogeneous tissues such as bone and air in the HN are included in the calculation area. 3D gamma analysis for individual structures is useful for IMRT dose verification. PMID:24796955
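
    For reference, the gamma index underlying such passing rates can be computed voxel by voxel. The brute-force sketch below (global dose-difference normalization, uniform voxel spacing assumed) is only an illustration, not the evaluation code used in the study:

        import numpy as np

        def gamma_at_point(ref_dose, eval_dose, ref_idx, spacing_mm,
                           dd_percent=3.0, dta_mm=3.0):
            # gamma = min over evaluated voxels of
            #   sqrt(|r - r_ref|^2 / DTA^2 + (D_eval - D_ref)^2 / dD^2);
            # the reference voxel passes when gamma <= 1.
            d_ref = ref_dose[ref_idx]
            dd = dd_percent / 100.0 * ref_dose.max()     # global normalization
            coords = np.indices(eval_dose.shape).reshape(3, -1).T * spacing_mm
            r_ref = np.array(ref_idx) * spacing_mm
            dist2 = ((coords - r_ref) ** 2).sum(axis=1)
            dose2 = (eval_dose.ravel() - d_ref) ** 2
            return np.sqrt(dist2 / dta_mm ** 2 + dose2 / dd ** 2).min()

    A per-structure passing rate is then the fraction of voxels inside the structure mask whose gamma value is at most 1.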

  18. A Novel Admixture-Based Pharmacogenetic Approach to Refine Warfarin Dosing in Caribbean Hispanics

    PubMed Central

    Claudio-Campos, Karla; Rivera-Miranda, Giselle; Bermúdez-Bosch, Luis; Renta, Jessicca Y.; Cadilla, Carmen L.; Cruz, Iadelisse; Feliu, Juan F.; Vergara, Cunegundo; Ruaño, Gualberto

    2016-01-01

    Aim This study is aimed at developing a novel admixture-adjusted pharmacogenomic approach to individually refine warfarin dosing in Caribbean Hispanic patients. Patients & Methods A multiple linear regression analysis of effective warfarin doses versus relevant genotypes, admixture, clinical and demographic factors was performed in 255 patients and further validated externally in another cohort of 55 individuals. Results The admixture-adjusted, genotype-guided warfarin dosing refinement algorithm developed in Caribbean Hispanics showed better predictability (R2 = 0.70, MAE = 0.72 mg/day) than a clinical algorithm that excluded genotypes and admixture (R2 = 0.60, MAE = 0.99 mg/day), and outperformed two prior pharmacogenetic algorithms in predicting effective dose in this population. For patients at the highest risk of adverse events, 45.5% of the dose predictions using the developed pharmacogenetic model resulted in ideal dose as compared with only 29% when using the clinical non-genetic algorithm (p<0.001). The admixture-driven pharmacogenetic algorithm predicted 58% of warfarin dose variance when externally validated in 55 individuals from an independent validation cohort (MAE = 0.89 mg/day, 24% mean bias). Conclusions Results supported our rationale to incorporate individuals' genotypes and unique admixture metrics into pharmacogenetic refinement models in order to increase predictability when expanding them to admixed populations like Caribbean Hispanics. Trial Registration ClinicalTrials.gov NCT01318057 PMID:26745506
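
    The dose-refinement model is a multiple linear regression of effective dose on clinical, genetic and admixture covariates. A toy sketch of the mechanics (hypothetical covariates and made-up numbers, not the published coefficients):

        import numpy as np

        # Toy data: columns are age, weight (kg), VKORC1 variant-allele count,
        # CYP2C9 variant carrier (0/1) and an admixture proportion; y is the
        # effective warfarin dose in mg/week. Real models are fitted on
        # hundreds of patients with many more covariates.
        X = np.array([[65, 70, 1, 0, 0.4],
                      [48, 92, 0, 1, 0.7],
                      [72, 60, 2, 0, 0.2],
                      [55, 80, 1, 1, 0.5],
                      [60, 75, 0, 0, 0.3],
                      [70, 68, 2, 1, 0.6]], dtype=float)
        y = np.array([28.0, 35.0, 17.5, 24.5, 38.5, 14.0])

        # Ordinary least squares with an intercept term.
        A = np.column_stack([np.ones(len(X)), X])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)

        def predict_dose(age, weight, vkorc1, cyp2c9, admixture):
            return float(coef @ np.array([1.0, age, weight, vkorc1, cyp2c9, admixture]))

    Note that published warfarin algorithms often model the dose on a square-root or logarithmic scale rather than the raw weekly dose; the linear scale here is only for brevity.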

  19. Implementation of a dose gradient method into optimization of dose distribution in prostate cancer 3D-CRT plans

    PubMed Central

    Giżyńska, Marta K.; Kukołowicz, Paweł F.; Kordowski, Paweł

    2014-01-01

    Aim The aim of this work is to present a method of beam weight and wedge angle optimization for patients with prostate cancer. Background 3D-CRT is usually realized with forward planning based on a trial and error method. Several authors have published a few methods of beam weight optimization applicable to the 3D-CRT. Still, none of these methods is in common use. Materials and methods Optimization is based on the assumption that the best plan is achieved if the dose gradient at the ICRU point is equal to zero. Our optimization algorithm requires beam quality index, depth of maximum dose, profiles of wedged fields and maximum dose to femoral heads. The method was tested for 10 patients with prostate cancer, treated with the 3-field technique. Optimized plans were compared with plans prepared by 12 experienced planners. Dose standard deviation in target volume, and minimum and maximum doses were analyzed. Results The quality of plans obtained with the proposed optimization algorithms was comparable to that of plans prepared by experienced planners. Mean difference in target dose standard deviation was 0.1% in favor of the plans prepared by planners for optimization of beam weights and wedge angles. Introducing a correction factor for patient body outline for dose gradient at the ICRU point improved dose distribution homogeneity. On average, a 0.1% lower standard deviation was achieved with the optimization algorithm. No significant difference in mean dose–volume histogram for the rectum was observed. Conclusions Optimization greatly shortens planning time: the average planning time was 5 min for forward planning and less than a minute for computer optimization. PMID:25337411

  20. Dose Reduction Techniques

    SciTech Connect

    WAGGONER, L.O.

    2000-05-16

    As radiation safety specialists, one of the things we are required to do is evaluate tools, equipment, materials and work practices and decide whether the use of these products or work practices will reduce radiation dose or risk to the environment. There is a tendency for many workers who work with radioactive material to accomplish radiological work the same way they have always done it rather than look for new technology or change their work practices. New technology is being developed all the time that can make radiological work easier and result in less radiation dose to the worker or reduce the possibility that contamination will be spread to the environment. As we discuss the various tools and techniques that reduce radiation dose, keep in mind that the radiological controls should be reasonable. We cannot always get the dose to zero, so we must try to accomplish the work efficiently and cost-effectively. There are times when we may have to accept that there is only so much that can be done. The goal is to do the smart things that protect the worker but do not hinder him while the task is being accomplished. In addition, we should not demand that large amounts of money be spent for equipment that has marginal value in order to save a few millirem. We have broken the handout into sections that should simplify the presentation. Time, distance, shielding, and source reduction are methods used to reduce dose and are covered in Part I on work execution. We then look at operational considerations, radiological design parameters, and discuss the characteristics of personnel who deal with ALARA. This handout should give you an overview of what it takes to have an effective dose reduction program.

  1. Computing Proton Dose to Irregularly Moving Targets

    PubMed Central

    Phillips, Justin; Gueorguiev, Gueorgui; Shackleford, James A.; Grassberger, Clemens; Dowdell, Stephen; Paganetti, Harald; Sharp, Gregory C.

    2014-01-01

    phantom (2 mm, 2%), and 90.8% (3 mm, 3%) for the patient data. Conclusions We have demonstrated a method for accurately reproducing proton dose to an irregularly moving target from a single CT image. We believe this algorithm could prove a useful tool to study the dosimetric impact of baseline shifts either before or during treatment. PMID:25029239

  2. Computing proton dose to irregularly moving targets

    NASA Astrophysics Data System (ADS)

    Phillips, Justin; Gueorguiev, Gueorgui; Shackleford, James A.; Grassberger, Clemens; Dowdell, Stephen; Paganetti, Harald; Sharp, Gregory C.

    2014-08-01

    phantom (2 mm, 2%), and 90.8% (3 mm, 3%) for the patient data. Conclusions: We have demonstrated a method for accurately reproducing proton dose to an irregularly moving target from a single CT image. We believe this algorithm could prove a useful tool to study the dosimetric impact of baseline shifts either before or during treatment.

  3. Dose Calculation Spreadsheet

    1997-06-10

    VENTSAR XL is an EXCEL Spreadsheet that can be used to calculate downwind doses as a result of a hypothetical atmospheric release. Both building effects and plume rise may be considered. VENTSAR XL will run using any version of Microsoft EXCEL version 4.0 or later. Macros (the programming language of EXCEL) were used to automate the calculations. The user enters a minimal amount of input and the code calculates the resulting concentrations and doses at various downwind distances as specified by the user.

  4. A computerized framework for monitoring four-dimensional dose distributions during stereotactic body radiation therapy using a portal dose image-based 2D/3D registration approach.

    PubMed

    Nakamoto, Takahiro; Arimura, Hidetaka; Nakamura, Katsumasa; Shioyama, Yoshiyuki; Mizoguchi, Asumi; Hirose, Taka-Aki; Honda, Hiroshi; Umezu, Yoshiyuki; Nakamura, Yasuhiko; Hirata, Hideki

    2015-03-01

    A computerized framework for monitoring four-dimensional (4D) dose distributions during stereotactic body radiation therapy based on a portal dose image (PDI)-based 2D/3D registration approach has been proposed in this study. Using the PDI-based registration approach, simulated 4D "treatment" CT images were derived from the deformation of 3D planning CT images so that a 2D planning PDI could be similar to a 2D dynamic clinical PDI at a breathing phase. The planning PDI was calculated by applying a dose calculation algorithm (a pencil beam convolution algorithm) to the geometry of the planning CT image and a virtual water equivalent phantom. The dynamic clinical PDIs were estimated from electronic portal imaging device (EPID) dynamic images including breathing phase data obtained during a treatment. The parameters of the affine transformation matrix were optimized based on an objective function and a gamma pass rate using a Levenberg-Marquardt (LM) algorithm. The proposed framework was applied to the EPID dynamic images of ten lung cancer patients, which included 183 frames (mean: 18.3 per patient). The 4D dose distributions during the treatment time were successfully obtained by applying the dose calculation algorithm to the simulated 4D "treatment" CT images. The mean±standard deviation (SD) of the percentage errors between the prescribed dose and the estimated dose at an isocenter for all cases was 3.25±4.43%. The maximum error for the ten cases was 14.67% (prescribed dose: 1.50Gy, estimated dose: 1.72Gy), and the minimum error was 0.00%. The proposed framework could be feasible for monitoring the 4D dose distribution and dose errors within a patient's body during treatment. PMID:25592290

  5. A spatially encoded dose difference maximal intensity projection map for patient dose evaluation: A new first line patient quality assurance tool

    SciTech Connect

    Hu Weigang; Graff, Pierre; Boettger, Thomas; Pouliot, Jean; and others

    2011-04-15

    Purpose: To develop a spatially encoded dose difference maximal intensity projection (DD-MIP) as an online patient dose evaluation tool for visualizing the dose differences between the planning dose and dose on the treatment day. Methods: Megavoltage cone-beam CT (MVCBCT) images acquired on the treatment day are used for generating the dose difference index. Each index is represented by different colors for underdose, acceptable, and overdose regions. A maximal intensity projection (MIP) algorithm is developed to compress all the information of an arbitrary 3D dose difference index into a 2D DD-MIP image. In such an algorithm, a distance transformation is generated based on the planning CT. Then, two new volumes representing the overdose and underdose regions of the dose difference index are encoded with the distance transformation map. The distance-encoded indices of each volume are normalized using the skin distance obtained on the planning CT. After that, two MIPs are generated based on the underdose and overdose volumes with green-to-blue and green-to-red lookup tables, respectively. Finally, the two MIPs are merged with an appropriate transparency level and rendered in planning CT images. Results: The spatially encoded DD-MIP was implemented in a dose-guided radiotherapy prototype and tested on 33 MVCBCT images from six patients. The user can easily establish the threshold for the overdose and underdose. A 3% difference between the treatment and planning dose was used as the threshold in the study; hence, the DD-MIP shows red or blue color for the dose difference >3% or ≤3%, respectively. With such a method, the overdose and underdose regions can be visualized and distinguished without being overshadowed by superficial dose differences. Conclusions: A DD-MIP algorithm was developed that compresses information from 3D into a single or two orthogonal projections while indicating to the user whether the dose difference is on the skin surface or deeper.
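
    A bare-bones sketch of projecting a 3D dose-difference index into separate overdose and underdose MIPs; the published method additionally encodes skin distance into the projected intensity, which is omitted here, and the 3% threshold is only the example value from the study:

        import numpy as np

        def dose_difference_mips(planned, daily, threshold=0.03):
            # Differences are expressed relative to the planned dose maximum;
            # voxels within +/- threshold are treated as acceptable.
            rel_diff = (daily - planned) / planned.max()
            over = np.where(rel_diff > threshold, rel_diff, 0.0)
            under = np.where(rel_diff < -threshold, -rel_diff, 0.0)
            # Maximal intensity projection along one axis (here axis 0).
            return over.max(axis=0), under.max(axis=0)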

  6. Calculation of Residual Dose Around Small Objects Using Mu2e Target as an Example

    SciTech Connect

    Pronskikh, V.S.; Leveling, A.F.; Mokhov, N.V.; Rakhno, I.L.; Aarnio, P.; /Aalto U.

    2011-09-01

    The MARS15 code provides contact residual dose rates for relatively large accelerator and experimental components for predefined irradiation and cooling times. The dose rate at particular distances from the components, some of which can be rather small in size, is calculated in a post Monte-Carlo stage via special algorithms described elsewhere. The approach is further developed and described in this paper.

  7. Genetic Algorithms and Local Search

    NASA Technical Reports Server (NTRS)

    Whitley, Darrell

    1996-01-01

    The first part of this presentation is a tutorial level introduction to the principles of genetic search and models of simple genetic algorithms. The second half covers the combination of genetic algorithms with local search methods to produce hybrid genetic algorithms. Hybrid algorithms can be modeled within the existing theoretical framework developed for simple genetic algorithms. An application of a hybrid to geometric model matching is given. The hybrid algorithm yields results that improve on the current state-of-the-art for this problem.

  8. Reactive Collision Avoidance Algorithm

    NASA Technical Reports Server (NTRS)

    Scharf, Daniel; Acikmese, Behcet; Ploen, Scott; Hadaegh, Fred

    2010-01-01

    The reactive collision avoidance (RCA) algorithm allows a spacecraft to find a fuel-optimal trajectory for avoiding an arbitrary number of colliding spacecraft in real time while accounting for acceleration limits. In addition to spacecraft, the technology can be used for vehicles that can accelerate in any direction, such as helicopters and submersibles. In contrast to existing, passive algorithms that simultaneously design trajectories for a cluster of vehicles working to achieve a common goal, RCA is implemented onboard spacecraft only when an imminent collision is detected, and then plans a collision avoidance maneuver for only that host vehicle, thus preventing a collision in an off-nominal situation that passive algorithms cannot handle. An example scenario for such a situation might be when a spacecraft in the cluster is approaching another one, but enters safe mode and begins to drift. Functionally, the RCA detects colliding spacecraft, plans an evasion trajectory by solving the Evasion Trajectory Problem (ETP), and then recovers after the collision is avoided. A direct optimization approach was used to develop the algorithm so it can run in real time. In this innovation, a parameterized class of avoidance trajectories is specified, and then the optimal trajectory is found by searching over the parameters. The class of trajectories is selected as bang-off-bang as motivated by optimal control theory. That is, an avoiding spacecraft first applies full acceleration in a constant direction, then coasts, and finally applies full acceleration to stop. The parameter optimization problem can be solved offline and stored as a look-up table of values. Using a look-up table allows the algorithm to run in real time. Given a colliding spacecraft, the properties of the collision geometry serve as indices of the look-up table that gives the optimal trajectory. For multiple colliding spacecraft, the set of trajectories that avoid all spacecraft is rapidly searched on

  9. An estimate of the propagated uncertainty for a dosemeter algorithm used for personnel monitoring.

    PubMed

    Veinot, K G

    2015-03-01

    The Y-12 National Security Complex utilises thermoluminescent dosemeters (TLDs) to monitor personnel for external radiation doses. The TLDs consist of four elements positioned behind various filters; dosemeters are processed on site and the readings are input into an algorithm to determine worker dose. When processing dosemeters and determining the dose equivalent to the worker, a number of steps are involved, including TLD reader calibration, TLD element calibration, corrections for fade and background, and inherent sensitivities of the dosemeter algorithm. In order to better understand the total uncertainty in calculated doses, a series of calculations were performed using certain assumptions and measurement data. Individual contributions to the uncertainty were propagated through the process, including final dose calculations for a number of representative source types. Although the uncertainty in a worker's calculated dose is not formally reported, these calculations can be used to verify the adequacy of a facility's dosimetry process. PMID:25009187
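
    For independent multiplicative correction factors, the usual propagation is addition of relative uncertainties in quadrature; a minimal sketch with illustrative values (not the Y-12 measurement data):

        import math

        def combined_relative_uncertainty(*relative_uncertainties):
            # Independent fractional uncertainties of multiplicative factors
            # (reader calibration, element calibration, fade/background
            # corrections, algorithm response) combine in quadrature.
            return math.sqrt(sum(u * u for u in relative_uncertainties))

        total = combined_relative_uncertainty(0.03, 0.05, 0.02, 0.04)  # ~0.073, i.e. ~7.3%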

  10. Low-Dose Carcinogenicity Studies

    EPA Science Inventory

    One of the major deficiencies of cancer risk assessments is the lack of low-dose carcinogenicity data. Most assessments require extrapolation from high to low doses, which is subject to various uncertainties. Only 4 low-dose carcinogenicity studies and 5 low-dose biomarker/pre-n...

  11. Multiple-dose acetaminophen pharmacokinetics.

    PubMed

    Sahajwalla, C G; Ayres, J W

    1991-09-01

    Four different treatments of acetaminophen (Tylenol) were administered in multiple doses to eight healthy volunteers. Each treatment (325, 650, 825, and 1000 mg) was administered five times at 6-h intervals. Saliva acetaminophen concentration versus time profiles were determined. Noncompartmental pharmacokinetic parameters were calculated and compared to determine whether acetaminophen exhibited linear or dose-dependent pharmacokinetics. For doses less than or equal to 18 mg/kg, area under the curve (AUC), half-life (t1/2), mean residence time (MRT), and ratio of AUC to dose for the first dose were compared with the last dose. No statistically significant differences were observed in dose-corrected AUC for the first or last dose among subjects or treatments. Half-lives and MRT were not significantly different among treatments for the first or the last dose. Statistically significant differences in t1/2 and MRT were noted (p less than 0.05) among subjects for the last dose. A plot of AUC versus dose for the first and the last doses exhibited a linear relationship. Dose-corrected saliva concentration versus time curves for the treatments were superimposable. Thus, acetaminophen exhibits linear pharmacokinetics for doses of 18 mg/kg or less. Plots of AUC versus dose for one subject who received doses higher than 18 mg/kg were curved, suggesting nonlinear behavior of acetaminophen in this subject. PMID:1800709
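
    The noncompartmental quantities compared above can be computed directly from a concentration-time profile. A minimal sketch (linear trapezoidal AUC, log-linear terminal fit, no extrapolation to infinity), with made-up concentrations:

        import numpy as np

        def noncompartmental_params(t, c, n_terminal=3):
            t, c = np.asarray(t, float), np.asarray(c, float)

            def trapz(y, x):
                # Linear trapezoidal rule over the sampled interval.
                return float(np.sum((y[1:] + y[:-1]) * np.diff(x) / 2.0))

            auc = trapz(c, t)                      # area under the curve
            aumc = trapz(c * t, t)                 # area under the first-moment curve
            # Terminal half-life from a log-linear fit to the last n points.
            slope, _ = np.polyfit(t[-n_terminal:], np.log(c[-n_terminal:]), 1)
            return {"AUC": auc, "t_half": np.log(2) / -slope, "MRT": aumc / auc}

        # Illustrative saliva concentrations (mg/L) after a single oral dose.
        params = noncompartmental_params([0.5, 1, 2, 4, 6], [8.0, 10.0, 7.2, 3.6, 1.8])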

  12. LADTAPXL Aqueous Dose Spreadsheet

    1999-08-10

    LADTAPXL is an EXCEL spreadsheet model of the NRC computer code LADTAP. LADTAPXL calculates maximally exposed individual and population doses from chronic liquid releases. Environmental pathways include external exposure resulting from recreational activities on the Savannah River and ingestion of water, fish, and invertebrates of Savannah River origin.

  13. New Antibiotic Dosing

    PubMed Central

    Pineda, Leslie C.; Watt, Kevin M.

    2015-01-01

    Infection is common in premature infants and can cause significant morbidity and mortality. To prevent these devastating consequences, most infants admitted to the neonatal intensive care unit (NICU) are exposed to antibiotics. However, dosing regimens are often extrapolated from data in adults and older children, increasing the risk for drug toxicity and lack of clinical efficacy because they fail to account for developmental changes in infant physiology. Despite legislation promoting and, in some cases, requiring pediatric drug studies, infants remain therapeutic orphans who often receive drugs "off-label" without data from clinical trials. Pharmacokinetic (PK) studies in premature infants have been scarce due to low study consent rates; limited blood volume available to conduct PK studies; difficulty in obtaining blood from infants; limited use of sensitive, low-volume drug concentration assays; and a lack of expertise in pediatric modeling and simulation. However, newer technologies are emerging with minimal-risk study designs, including ultra-low-volume assays, PK modeling and simulation, and opportunistic drug protocols. With minimal-risk study designs, PK data and dosing regimens for infants are now available for antibiotics commonly used in the NICU, including ampicillin, clindamycin, meropenem, metronidazole, and piperacillin/tazobactam. The discrepancy between previous dosing recommendations extrapolated from adult data and newer dosing regimens based on infant PK studies highlights the need to conduct PK studies in premature infants. PMID:25678003

  14. When is a dose not a dose

    SciTech Connect

    Bond, V.P.

    1991-01-01

    Although an enormous amount of progress has been made in the fields of radiation protection and risk assessment, a number of significant problems remain. The one problem which transcends all the rest, and which has been subject to considerable misunderstanding, involves what has come to be known as the 'linear non-threshold hypothesis', or 'linear hypothesis'. Particularly troublesome has been the interpretation that any amount of radiation can cause an increase in the excess incidence of cancer. The linear hypothesis has dominated radiation protection philosophy for more than three decades, with enormous financial, societal and political impacts and has engendered an almost morbid fear of low-level exposure to ionizing radiation in large segments of the population. This document presents a different interpretation of the linear hypothesis. The basis for this view lies in the evolution of dose-response functions, particularly with respect to their use initially in the context of early acute effects, and then for the late effects, carcinogenesis and mutagenesis. 11 refs., 4 figs. (MHB)

  15. CAVITY CONTROL ALGORITHM

    SciTech Connect

    Tomasz Plawski, J. Hovater

    2010-09-01

    A digital low level radio frequency (RF) system typically incorporates either a heterodyne or direct sampling technique, followed by fast ADCs, then an FPGA, and finally a transmitting DAC. This universal platform opens up the possibilities for a variety of control algorithm implementations. The foremost concern for an RF control system is cavity field stability, and to meet the required quality of regulation, the chosen control system needs to have sufficient feedback gain. In this paper we will investigate the effectiveness of the regulation for three basic control system algorithms: I&Q (In-phase and Quadrature), Amplitude & Phase and digital SEL (Self Exciting Loop) along with the example of the Jefferson Lab 12 GeV cavity field control system.
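
    A minimal sketch of the I&Q measurement such a controller acts on: mix the sampled signal with quadrature references at the intermediate frequency and average, then recover amplitude and phase. The loop filter itself is not shown and the sampling parameters are arbitrary:

        import numpy as np

        def iq_detect(samples, f_if, f_s):
            # Mix with quadrature references at the intermediate frequency and
            # average; amplitude and phase follow from the I and Q components.
            n = np.arange(len(samples))
            ref_i = np.cos(2 * np.pi * f_if / f_s * n)
            ref_q = np.sin(2 * np.pi * f_if / f_s * n)
            i_comp = 2.0 * np.mean(samples * ref_i)
            q_comp = 2.0 * np.mean(samples * ref_q)
            return np.hypot(i_comp, q_comp), np.arctan2(q_comp, i_comp)

        # Test tone at the IF: amplitude 0.7, phase 0.3 rad; the detector reports
        # amplitude ~0.7 and phase ~-0.3 rad with this sign convention.
        fs, fif = 100e6, 12.5e6
        t = np.arange(1000)
        sig = 0.7 * np.cos(2 * np.pi * fif / fs * t + 0.3)
        amp, phase = iq_detect(sig, fif, fs)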

  16. Comparing barrier algorithms

    NASA Technical Reports Server (NTRS)

    Arenstorf, Norbert S.; Jordan, Harry F.

    1987-01-01

    A barrier is a method for synchronizing a large number of concurrent computer processes. After considering some basic synchronization mechanisms, a collection of barrier algorithms with either linear or logarithmic depth are presented. A graphical model is described that profiles the execution of the barriers and other parallel programming constructs. This model shows how the interaction between the barrier algorithms and the work that they synchronize can impact their performance. One result is that logarithmic tree structured barriers show good performance when synchronizing fixed length work, while linear self-scheduled barriers show better performance when synchronizing fixed length work with an imbedded critical section. The linear barriers are better able to exploit the process skew associated with critical sections. Timing experiments, performed on an eighteen processor Flex/32 shared memory multiprocessor, that support these conclusions are detailed.
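
    A minimal counter-based (linear-depth) barrier of the kind compared above, sketched with Python threads; Python's threading.Barrier provides the same functionality directly, the sketch just makes the linear structure explicit:

        import threading

        class CentralBarrier:
            # Centralized (linear) barrier: each arrival increments a shared
            # counter under a lock; the last arrival advances the generation
            # and wakes the rest, making the barrier reusable.
            def __init__(self, n):
                self.n = n
                self.count = 0
                self.generation = 0
                self.cond = threading.Condition()

            def wait(self):
                with self.cond:
                    gen = self.generation
                    self.count += 1
                    if self.count == self.n:
                        self.count = 0
                        self.generation += 1
                        self.cond.notify_all()
                    else:
                        while gen == self.generation:
                            self.cond.wait()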

  17. Algorithms, games, and evolution

    PubMed Central

    Chastain, Erick; Livnat, Adi; Papadimitriou, Christos; Vazirani, Umesh

    2014-01-01

    Even the most seasoned students of evolution, starting with Darwin himself, have occasionally expressed amazement that the mechanism of natural selection has produced the whole of Life as we see it around us. There is a computational way to articulate the same amazement: “What algorithm could possibly achieve all this in a mere three and a half billion years?” In this paper we propose an answer: We demonstrate that in the regime of weak selection, the standard equations of population genetics describing natural selection in the presence of sex become identical to those of a repeated game between genes played according to multiplicative weight updates (MWUA), an algorithm known in computer science to be surprisingly powerful and versatile. MWUA maximizes a tradeoff between cumulative performance and entropy, which suggests a new view on the maintenance of diversity in evolution. PMID:24979793
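
    A minimal sketch of the multiplicative weights update rule referred to above, with toy payoffs in [-1, 1]; the paper's mapping to population genetics is, of course, not captured by this fragment:

        def multiplicative_weights(payoff_rounds, epsilon=0.1):
            # Each round, every action's weight is multiplied by
            # (1 + epsilon * payoff) and the weights are renormalized.
            n = len(payoff_rounds[0])
            w = [1.0] * n
            for payoffs in payoff_rounds:
                w = [wi * (1.0 + epsilon * pi) for wi, pi in zip(w, payoffs)]
                total = sum(w)
                w = [wi / total for wi in w]
            return w

        # Two actions; the second gets slightly higher payoffs, so its weight grows.
        weights = multiplicative_weights([[0.1, 0.2], [0.0, 0.1]] * 50)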

  18. Comparing barrier algorithms

    NASA Technical Reports Server (NTRS)

    Arenstorf, Norbert S.; Jordan, Harry F.

    1989-01-01

    A barrier is a method for synchronizing a large number of concurrent computer processes. After considering some basic synchronization mechanisms, a collection of barrier algorithms with either linear or logarithmic depth are presented. A graphical model is described that profiles the execution of the barriers and other parallel programming constructs. This model shows how the interaction between the barrier algorithms and the work that they synchronize can impact their performance. One result is that logarithmic tree structured barriers show good performance when synchronizing fixed length work, while linear self-scheduled barriers show better performance when synchronizing fixed length work with an imbedded critical section. The linear barriers are better able to exploit the process skew associated with critical sections. Timing experiments, performed on an eighteen processor Flex/32 shared memory multiprocessor that support these conclusions, are detailed.

  19. The Relegation Algorithm

    NASA Astrophysics Data System (ADS)

    Deprit, André; Palacián, Jesús; Deprit, Etienne

    2001-03-01

    The relegation algorithm extends the method of normalization by Lie transformations. Given a Hamiltonian that is a power series ℋ = ℋ₀ + εℋ₁ + … of a small parameter ε, normalization constructs a map which converts the principal part ℋ₀ into an integral of the transformed system; relegation does the same for an arbitrary function ℋ[G]. If the Lie derivative induced by ℋ[G] is semi-simple, a double recursion produces the generator of the relegating transformation. The relegation algorithm is illustrated with an elementary example borrowed from galactic dynamics; the exercise serves as a standard against which to test software implementations. Relegation is also applied to the more substantial example of a Keplerian system perturbed by radiation pressure emanating from a rotating source.

  20. Genetic Algorithm for Optimization: Preprocessor and Algorithm

    NASA Technical Reports Server (NTRS)

    Sen, S. K.; Shaykhian, Gholam A.

    2006-01-01

    Genetic algorithm (GA) inspired by Darwin's theory of evolution and employed to solve optimization problems - unconstrained or constrained - uses an evolutionary process. A GA has several parameters such as the population size, search space, crossover and mutation probabilities, and fitness criterion. These parameters are not universally known/determined a priori for all problems. Depending on the problem at hand, these parameters need to be decided such that the resulting GA performs the best. We present here a preprocessor that achieves just that, i.e., it determines, for a specified problem, the foregoing parameters so that the consequent GA performs best for the problem. We also stress the need for such a preprocessor, both for the quality (error) and the cost (complexity) of producing the solution. The preprocessor includes, as its first step, making use of all the information such as that of the nature/character of the function/system, search space, physical/laboratory experimentation (if already done/available), and the physical environment. It also includes the information that can be generated through any means - deterministic/nondeterministic/graphics. Instead of attempting a solution of the problem straightaway through a GA without having/using the information/knowledge of the character of the system, we would consciously do a much better job of producing a solution by using the information generated/created in the very first step of the preprocessor. We, therefore, unstintingly advocate the use of a preprocessor to solve a real-world optimization problem, including NP-complete ones, before using the statistically most appropriate GA. We also include such a GA for unconstrained function optimization problems.

  1. An efficient algorithm for function optimization: modified stem cells algorithm

    NASA Astrophysics Data System (ADS)

    Taherdangkoo, Mohammad; Paziresh, Mahsa; Yazdi, Mehran; Bagheri, Mohammad

    2013-03-01

    In this paper, we propose an optimization algorithm based on the intelligent behavior of stem cell swarms in reproduction and self-organization. Optimization algorithms, such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO) algorithm, Ant Colony Optimization (ACO) algorithm and Artificial Bee Colony (ABC) algorithm, can give solutions to linear and non-linear problems near the optimum for many applications; however, in some cases, they can suffer from becoming trapped in local optima. The Stem Cells Algorithm (SCA) is an optimization algorithm inspired by the natural behavior of stem cells in evolving themselves into new and improved cells. The SCA avoids the local optima problem successfully. In this paper, we have made small changes in the implementation of this algorithm to obtain improved performance over previous versions. Using a series of benchmark functions, we assess the performance of the proposed algorithm and compare it with that of the other aforementioned optimization algorithms. The results obtained demonstrate the superiority of the Modified Stem Cells Algorithm (MSCA).

  2. Algorithm Visualization System for Teaching Spatial Data Algorithms

    ERIC Educational Resources Information Center

    Nikander, Jussi; Helminen, Juha; Korhonen, Ari

    2010-01-01

    TRAKLA2 is a web-based learning environment for data structures and algorithms. The system delivers automatically assessed algorithm simulation exercises that are solved using a graphical user interface. In this work, we introduce a novel learning environment for spatial data algorithms, SDA-TRAKLA2, which has been implemented on top of the…

  3. A novel method for 4D measurement-guided planned dose perturbation to estimate patient dose/DVH changes due to interplay

    NASA Astrophysics Data System (ADS)

    Nelms, B.; Feygelman, V.

    2013-06-01

    As IMRT/VMAT technology continues to evolve, so do the dosimetric QA methods. We present the theoretical framework for the novel planned dose perturbation algorithm. It allows not only reconstruction of the 3D volumetric dose in a patient from a measurement in a cylindrical phantom, but also incorporation of the effects of the interplay between intrafractional organ motion and dynamic delivery. Unlike in our previous work, this 4D dose reconstruction does not require the knowledge of the TPS dose for each control point of the plan, making the method much more practical. Motion is viewed as just another source of error, accounted for by perturbing (morphing) the planned dose distribution based on the limited empirical dose from the phantom measurement. The strategy for empirical verification of the algorithm is presented as the necessary next step.

  4. SPA: Solar Position Algorithm

    NASA Astrophysics Data System (ADS)

    Reda, Ibrahim; Andreas, Afshin

    2015-04-01

    The Solar Position Algorithm (SPA) calculates the solar zenith and azimuth angles in the period from the year -2000 to 6000, with uncertainties of +/- 0.0003 degrees based on the date, time, and location on Earth. SPA is implemented in C; in addition to being available for download, an online calculator using this code is available at http://www.nrel.gov/midc/solpos/spa.html.

  5. Quantum defragmentation algorithm

    SciTech Connect

    Burgarth, Daniel; Giovannetti, Vittorio

    2010-08-15

    In this addendum to our paper [D. Burgarth and V. Giovannetti, Phys. Rev. Lett. 99, 100501 (2007)] we prove that during the transformation that allows one to enforce control by relaxation on a quantum system, the ancillary memory can be kept at a finite size, independently from the fidelity one wants to achieve. The result is obtained by introducing the quantum analog of defragmentation algorithms which are employed for efficiently reorganizing classical information in conventional hard disks.

  6. NOSS altimeter algorithm specifications

    NASA Technical Reports Server (NTRS)

    Hancock, D. W.; Forsythe, R. G.; Mcmillan, J. D.

    1982-01-01

    A description of all algorithms required for altimeter processing is given. Each description includes title, description, inputs/outputs, general algebraic sequences and data volume. All required input/output data files are described and the computer resources required for the entire altimeter processing system were estimated. The majority of the data processing requirements for any radar altimeter of the Seasat-1 type are scoped. Additions and deletions could be made for the specific altimeter products required by other projects.

  7. Sarsat location algorithms

    NASA Astrophysics Data System (ADS)

    Nardi, Jerry

    The Satellite Aided Search and Rescue (Sarsat) is designed to detect and locate distress beacons using satellite receivers. Algorithms used for calculating the positions of 406 MHz beacons and 121.5/243 MHz beacons are presented. The techniques for matching, resolving and averaging calculated locations from multiple satellite passes are also described along with results pertaining to single pass and multiple pass location estimate accuracy.

  8. Dose reconstruction for real-time patient-specific dose estimation in CT

    SciTech Connect

    De Man, Bruno Yin, Zhye; Wu, Mingye; FitzGerald, Paul; Kalra, Mannudeep

    2015-05-15

    Purpose: Many recent computed tomography (CT) dose reduction approaches belong to one of three categories: statistical reconstruction algorithms, efficient x-ray detectors, and optimized CT acquisition schemes with precise control over the x-ray distribution. The latter category could greatly benefit from fast and accurate methods for dose estimation, which would enable real-time patient-specific protocol optimization. Methods: The authors present a new method for volumetrically reconstructing absorbed dose on a per-voxel basis, directly from the actual CT images. The authors’ specific implementation combines a distance-driven pencil-beam approach to model the first-order x-ray interactions with a set of Gaussian convolution kernels to model the higher-order x-ray interactions. The authors performed a number of 3D simulation experiments comparing the proposed method to a Monte Carlo based ground truth. Results: The authors’ results indicate that the proposed approach offers a good trade-off between accuracy and computational efficiency. The images show a good qualitative correspondence to Monte Carlo estimates. Preliminary quantitative results show errors below 10%, except in bone regions, where the authors see a bigger model mismatch. The computational complexity is similar to that of a low-resolution filtered-backprojection algorithm. Conclusions: The authors present a method for analytic dose reconstruction in CT, similar to the techniques used in radiation therapy planning with megavoltage energies. Future work will include refinements of the proposed method to improve the accuracy as well as a more extensive validation study. The proposed method is not intended to replace methods that track individual x-ray photons, but the authors expect that it may prove useful in applications where real-time patient-specific dose estimation is required.
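
    A toy sketch in the spirit of the approach described above: take a map of first-order (primary) energy deposition and approximate the higher-order scatter by adding a Gaussian-blurred copy. The kernel width and scatter fraction are placeholders, not the authors' model parameters:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def reconstruct_dose(primary_energy, scatter_sigma_vox=3.0, scatter_fraction=0.3):
            # primary_energy: 3D map of first-order energy deposition per voxel,
            # e.g. obtained from a pencil-beam backprojection of the CT images.
            scatter = gaussian_filter(primary_energy, sigma=scatter_sigma_vox)
            return (1.0 - scatter_fraction) * primary_energy + scatter_fraction * scatter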

  9. Algorithms for builder guidelines

    SciTech Connect

    Balcomb, J.D.; Lekov, A.B.

    1989-06-01

    The Builder Guidelines are designed to make simple, appropriate guidelines available to builders for their specific localities. Builders may select from passive solar and conservation strategies with different performance potentials. They can then compare the calculated results for their particular house design with a typical house in the same location. Algorithms used to develop the Builder Guidelines are described. The main algorithms used are the monthly solar ratio (SLR) method for winter heating, the diurnal heat capacity (DHC) method for temperature swing, and a new simplified calculation method (McCool) for summer cooling. This paper applies the algorithms to estimate the performance potential of passive solar strategies, and the annual heating and cooling loads of various combinations of conservation and passive solar strategies. The basis of the McCool method is described. All three methods are implemented in a microcomputer program used to generate the guideline numbers. Guidelines for Denver, Colorado, are used to illustrate the results. The structure of the guidelines and worksheet booklets are also presented. 5 refs., 3 tabs.

  10. RADFLO physics and algorithms

    SciTech Connect

    Symbalisty, E.M.D.; Zinn, J.; Whitaker, R.W.

    1995-09-01

    This paper describes the history, physics, and algorithms of the computer code RADFLO and its extension HYCHEM. RADFLO is a one-dimensional, radiation-transport hydrodynamics code that is used to compute early-time fireball behavior for low-altitude nuclear bursts. The primary use of the code is the prediction of optical signals produced by nuclear explosions. It has also been used to predict thermal and hydrodynamic effects that are used for vulnerability and lethality applications. Another closely related code, HYCHEM, is an extension of RADFLO which includes the effects of nonequilibrium chemistry. Some examples of numerical results will be shown, along with scaling expressions derived from those results. We describe new computations of the structures and luminosities of steady-state shock waves and radiative thermal waves, which have been extended to cover a range of ambient air densities for high-altitude applications. We also describe recent modifications of the codes to use a one-dimensional analog of the CAVEAT fluid-dynamics algorithm in place of the former standard Richtmyer-von Neumann algorithm.

  11. Large scale tracking algorithms.

    SciTech Connect

    Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  12. Algorithm for navigated ESS.

    PubMed

    Baudoin, T; Grgić, M V; Zadravec, D; Geber, G; Tomljenović, D; Kalogjera, L

    2013-12-01

    ENT navigation has given new opportunities in performing Endoscopic Sinus Surgery (ESS) and improving surgical outcome of the patients' treatment. ESS assisted by a navigation system could be called Navigated Endoscopic Sinus Surgery (NESS). As it is generally accepted that the NESS should be performed only in cases of complex anatomy and pathology, it has not yet been established as a state-of-the-art procedure and thus not used on a daily basis. This paper presents an algorithm for use of a navigation system for basic ESS in the treatment of chronic rhinosinusitis (CRS). The algorithm includes five units that should be highlighted using a navigation system. They are as follows: 1) nasal vestibule unit, 2) OMC unit, 3) anterior ethmoid unit, 4) posterior ethmoid unit, and 5) sphenoid unit. Each unit has a shape of a triangular pyramid and consists of at least four reference points or landmarks. As many landmarks as possible should be marked when determining one of the five units. Navigated orientation in each unit should always precede any surgical intervention. The algorithm should improve the learning curve of trainees and enable surgeons to use the navigation system routinely and systematically. PMID:24260766

  13. Developing dataflow algorithms

    SciTech Connect

    Hiromoto, R.E. ); Bohm, A.P.W. . Dept. of Computer Science)

    1991-01-01

    Our goal is to study the performance of a collection of numerical algorithms written in Id which is available to users of Motorola's dataflow machine Monsoon. We will study the dataflow performance of these implementations first under the parallel profiling simulator Id World, and second in comparison with actual dataflow execution on the Motorola Monsoon. This approach will allow us to follow the computational and structural details of the parallel algorithms as implemented on dataflow systems. When running our programs on the Id World simulator we will examine the behaviour of algorithms at dataflow graph level, where each instruction takes one timestep and data becomes available at the next. This implies that important machine level phenomena such as the effect that global communication time may have on the computation are not addressed. These phenomena will be addressed when we run our programs on the Monsoon hardware. Potential ramifications for compilation techniques, functional programming style, and program efficiency are significant to this study. In a later stage of our research we will compare the efficiency of Id programs to programs written in other languages. This comparison will be of a rather qualitative nature as there are too many degrees of freedom in a language implementation for a quantitative comparison to be of interest. We begin our study by examining one routine that exhibits distinctive computational characteristics. This routine is the Fast Fourier Transform; its characteristics are computational parallelism and the data dependences between the butterfly shuffles.

  14. Evaluating super resolution algorithms

    NASA Astrophysics Data System (ADS)

    Kim, Youn Jin; Park, Jong Hyun; Shin, Gun Shik; Lee, Hyun-Seung; Kim, Dong-Hyun; Park, Se Hyeok; Kim, Jaehyun

    2011-01-01

    This study intends to establish a sound testing and evaluation methodology, based upon human visual characteristics, for assessing image restoration accuracy, in addition to comparing the subjective results with predictions by objective evaluation methods. In total, six different super resolution (SR) algorithms - such as iterative back-projection (IBP), robust SR, maximum a posteriori (MAP), projections onto convex sets (POCS), a non-uniform interpolation, and a frequency domain approach - were selected. The performance comparison between the SR algorithms in terms of their restoration accuracy was carried out both subjectively and objectively. The former methodology relies upon the paired comparison method that involves the simultaneous scaling of two stimuli with respect to image restoration accuracy. For the latter, both conventional image quality metrics and color difference methods are implemented. Consequently, POCS and a non-uniform interpolation outperformed the others for an ideal situation, while restoration-based methods appear more accurate to the HR image in a real-world case where prior information about the blur kernel remains unknown. However, the noise-added image could not be restored successfully by any of those methods. The latest International Commission on Illumination (CIE) standard color difference equation CIEDE2000 was found to predict the subjective results accurately and outperformed conventional methods for evaluating the restoration accuracy of those SR algorithms.

  15. Automated coronary artery calcification detection on low-dose chest CT images

    NASA Astrophysics Data System (ADS)

    Xie, Yiting; Cham, Matthew D.; Henschke, Claudia; Yankelevitz, David; Reeves, Anthony P.

    2014-03-01

    Coronary artery calcification (CAC) measurement from low-dose CT images can be used to assess the risk of coronary artery disease. A fully automatic algorithm to detect and measure CAC from low-dose non-contrast, non-ECG-gated chest CT scans is presented. Based on the automatically detected CAC, the Agatston score (AS), mass score and volume score were computed. These were compared with scores obtained manually from standard-dose ECG-gated scans and low-dose un-gated scans of the same patient. The automatic algorithm segments the heart region based on other pre-segmented organs to provide a coronary region mask. The mitral valve and aortic valve calcification is identified and excluded. All remaining voxels greater than 180HU within the mask region are considered as CAC candidates. The heart segmentation algorithm was evaluated on 400 non-contrast cases with both low-dose and regular dose CT scans. By visual inspection, 371 (92.8%) of the segmentations were acceptable. The automated CAC detection algorithm was evaluated on 41 low-dose non-contrast CT scans. Manual markings were performed on both low-dose and standard-dose scans for these cases. Using linear regression, the correlation of the automatic AS with the standard-dose manual scores was 0.86; with the low-dose manual scores the correlation was 0.91. Standard risk categories were also computed. The automated method risk category agreed with manual markings of gated scans for 24 cases while 15 cases were 1 category off. For low-dose scans, the automatic method agreed with 33 cases while 7 cases were 1 category off.
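
    For reference, a per-slice Agatston computation can be sketched as below, using the conventional 130 HU lesion threshold and peak-HU weighting. This is a simplified illustration, not the detection pipeline of the study, which applies a higher 180 HU candidate threshold on un-gated low-dose scans:

        import numpy as np
        from scipy import ndimage

        def agatston_score_slice(hu_slice, pixel_area_mm2, threshold_hu=130):
            # Connected lesions above the threshold are weighted 1-4 by their peak
            # HU (130-199, 200-299, 300-399, >=400) and multiplied by lesion area.
            # Minimum-area filtering and slice-thickness handling are omitted.
            mask = hu_slice >= threshold_hu
            labels, n_lesions = ndimage.label(mask)
            score = 0.0
            for i in range(1, n_lesions + 1):
                lesion = labels == i
                weight = min(int(hu_slice[lesion].max() // 100), 4)
                score += weight * lesion.sum() * pixel_area_mm2
            return score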

  16. Dose specification for 192Ir high dose rate brachytherapy in terms of dose-to-water-in-medium and dose-to-medium-in-medium

    NASA Astrophysics Data System (ADS)

    Paiva Fonseca, Gabriel; Carlsson Tedgren, Åsa; Reniers, Brigitte; Nilsson, Josef; Persson, Maria; Yoriyaz, Hélio; Verhaegen, Frank

    2015-06-01

    Dose calculation in high dose rate brachytherapy with 192Ir is usually based on the TG-43U1 protocol where all media are considered to be water. Several dose calculation algorithms have been developed that are capable of handling heterogeneities with two possibilities to report dose: dose-to-medium-in-medium (Dm,m) and dose-to-water-in-medium (Dw,m). The relation between Dm,m and Dw,m for 192Ir is the main goal of this study, in particular the dependence of Dw,m on the dose calculation approach using either large cavity theory (LCT) or small cavity theory (SCT). A head and neck case was selected due to the presence of media with a large range of atomic numbers relevant to tissues and mass densities such as air, soft tissues and bone interfaces. This case was simulated using a Monte Carlo (MC) code to score: Dm,m, Dw,m (LCT), mean photon energy and photon fluence. Dw,m (SCT) was derived from MC simulations using the ratio between the unrestricted collisional stopping power of the actual medium and water. Differences between Dm,m and Dw,m (SCT or LCT) can be negligible (<1%) for some tissues e.g. muscle and significant for other tissues with differences of up to 14% for bone. Using SCT or LCT approaches leads to differences between Dw,m (SCT) and Dw,m (LCT) up to 29% for bone and 36% for teeth. The mean photon energy distribution ranges from 222 keV up to 356 keV. However, results obtained using mean photon energies are not equivalent to the ones obtained using the full, local photon spectrum. This work concludes that it is essential that brachytherapy studies clearly report the dose quantity. It further shows that while differences between Dm,m and Dw,m (SCT) mainly depend on tissue type, differences between Dm,m and Dw,m (LCT) are, in addition, significantly dependent on the local photon energy fluence spectrum which varies with distance to implanted sources.
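
    The small-cavity-theory conversion mentioned above is a simple ratio; a sketch with placeholder stopping-power values (real calculations average tabulated unrestricted mass collision stopping powers over the local electron spectrum):

        def dose_to_water_sct(dose_to_medium, s_over_rho_water, s_over_rho_medium):
            # Dw,m = Dm,m * [(S/rho)_water / (S/rho)_medium]
            return dose_to_medium * s_over_rho_water / s_over_rho_medium

        # Illustrative placeholder values for a bone voxel; not tabulated data.
        d_w_m = dose_to_water_sct(dose_to_medium=2.00,
                                  s_over_rho_water=1.85, s_over_rho_medium=1.68)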

  17. Radiation dose reduction for coronary artery calcium scoring at 320-detector CT with adaptive iterative dose reduction 3D.

    PubMed

    Tatsugami, Fuminari; Higaki, Toru; Fukumoto, Wataru; Kaichi, Yoko; Fujioka, Chikako; Kiguchi, Masao; Yamamoto, Hideya; Kihara, Yasuki; Awai, Kazuo

    2015-06-01

    To assess the possibility of reducing the radiation dose for coronary artery calcium (CAC) scoring by using adaptive iterative dose reduction 3D (AIDR 3D) on a 320-detector CT scanner. Fifty-four patients underwent routine- and low-dose CT for CAC scoring. Low-dose CT was performed at one-third of the tube current used for routine-dose CT. Routine-dose CT was reconstructed with filtered back projection (FBP) and low-dose CT was reconstructed with AIDR 3D. We compared the calculated Agatston-, volume-, and mass scores of these images. The overall percentage difference in the Agatston-, volume-, and mass scores between routine- and low-dose CT studies was 15.9, 11.6, and 12.6%, respectively. There were no significant differences in the routine- and low-dose CT studies irrespective of the scoring algorithms applied. The CAC measurements of both imaging modalities were highly correlated with respect to the Agatston- (r = 0.996), volume- (r = 0.996), and mass score (r = 0.997; p < 0.001, all); the Bland-Altman limits of agreement scores were -37.4 to 51.4, -31.2 to 36.4 and -30.3 to 40.9%, respectively, suggesting that AIDR 3D was a good alternative for FBP. The mean effective radiation dose for routine- and low-dose CT was 2.2 and 0.7 mSv, respectively. The use of AIDR 3D made it possible to reduce the radiation dose by 67% for CAC scoring without impairing the quantification of coronary calcification. PMID:25754302

  18. Recommendations for dose calculations of lung cancer treatment plans treated with stereotactic ablative body radiotherapy (SABR)

    NASA Astrophysics Data System (ADS)

    Devpura, S.; Siddiqui, M. S.; Chen, D.; Liu, D.; Li, H.; Kumar, S.; Gordon, J.; Ajlouni, M.; Movsas, B.; Chetty, I. J.

    2014-03-01

    The purpose of this study was to systematically evaluate dose distributions computed with 5 different dose algorithms for patients with lung cancers treated using stereotactic ablative body radiotherapy (SABR). Treatment plans for 133 lung cancer patients, initially computed with a 1D-pencil beam (equivalent-path-length, EPL-1D) algorithm, were recalculated with 4 other algorithms commissioned for treatment planning, including 3-D pencil-beam (EPL-3D), anisotropic analytical algorithm (AAA), collapsed cone convolution superposition (CCC), and Monte Carlo (MC). The plan prescription dose was 48 Gy in 4 fractions normalized to the 95% isodose line. Tumors were classified according to location: peripheral tumors surrounded by lung (lung-island, N=39), peripheral tumors attached to the rib-cage or chest wall (lung-wall, N=44), and centrally-located tumors (lung-central, N=50). Relative to the EPL-1D algorithm, PTV D95 and mean dose values computed with the other 4 algorithms were lowest for "lung-island" tumors with smallest field sizes (3-5 cm). On the other hand, the smallest differences were noted for lung-central tumors treated with largest field widths (7-10 cm). Amongst all locations, dose distribution differences were most strongly correlated with tumor size for lung-island tumors. For most cases, convolution/superposition and MC algorithms were in good agreement. Mean lung dose (MLD) values computed with the EPL-1D algorithm were highly correlated with that of the other algorithms (correlation coefficient =0.99). The MLD values were found to be ~10% lower for small lung-island tumors with the model-based (conv/superposition and MC) vs. the correction-based (pencil-beam) algorithms with the model-based algorithms predicting greater low dose spread within the lungs. This study suggests that pencil beam algorithms should be avoided for lung SABR planning. For the most challenging cases, small tumors surrounded entirely by lung tissue (lung-island type), a Monte

  19. Hanford Site Annual Report Radiological Dose Calculation Upgrade Evaluation

    SciTech Connect

    Snyder, Sandra F.

    2010-02-28

    Operations at the Hanford Site, Richland, Washington, result in the release of radioactive materials that can expose offsite residents. Site authorities are required to estimate the dose to the maximally exposed offsite resident. Due to the very low levels of exposure at the residence, computer models, rather than environmental samples, are used to estimate exposure, intake, and dose. A DOS-based model has been used in the past (GENII version 1.485). GENII v1.485 has been updated to a Windows®-based software (GENII version 2.08). Use of the updated software will facilitate future dose evaluations, but must be demonstrated to provide results comparable to those of GENII v1.485. This report describes the GENII v1.485 and GENII v2.08 exposure, intake, and dose estimates for the maximally exposed offsite resident for calendar year 2008. The GENII v2.08 results reflect updates to the implemented algorithms. No two environmental models produce the same results, as was again demonstrated in this report. The aggregated dose results from the 2008 Hanford Site airborne and surface-water exposure scenarios are comparable. Therefore, the GENII v2.08 software is recommended for future offsite resident dose evaluations.

  20. Radiation dose to physicians’ eye lens during interventional radiology

    NASA Astrophysics Data System (ADS)

    Bahruddin, N. A.; Hashim, S.; Karim, M. K. A.; Sabarudin, A.; Ang, W. C.; Salehhon, N.; Bakar, K. A.

    2016-03-01

    The demand for interventional radiology has increased, leading to significant radiation risk, and eye lens dose assessment has become a major concern. In this study, we investigate physicians' eye lens doses during interventional procedures. Measurements were made using TLD-100 (LiF: Mg, Ti) dosimeters and recorded as the equivalent dose at a depth of 0.07 mm, Hp(0.07). Annual Hp(0.07) and annual effective dose were estimated using an estimated annual workload and the Von Boetticher algorithm. Our results showed mean Hp(0.07) doses of 0.33 mSv and 0.20 mSv for the left and right eye lens, respectively. The highest estimated annual eye lens dose was 29.33 mSv per year, recorded for the left eye lens during fistulogram procedures. Five physicians exceeded the 20 mSv dose limit recommended by the International Commission on Radiological Protection (ICRP). It is suggested that frequent training and education on occupational radiation exposure are necessary to increase physicians' knowledge and awareness, thus reducing dose during interventional procedures.

  1. TU-F-17A-08: The Relative Accuracy of 4D Dose Accumulation for Lung Radiotherapy Using Rigid Dose Projection Versus Dose Recalculation On Every Breathing Phase

    SciTech Connect

    Lamb, J; Lee, C; Tee, S; Lee, P; Iwamoto, K; Low, D; Valdes, G; Robinson, C

    2014-06-15

    Purpose: To investigate the accuracy of 4D dose accumulation using projection of dose calculated on the end-exhalation, mid-ventilation, or average intensity breathing phase CT scan, versus dose accumulation performed using full Monte Carlo dose recalculation on every breathing phase. Methods: Radiotherapy plans were analyzed for 10 patients with stage I-II lung cancer planned using 4D-CT. SBRT plans were optimized using the dose calculated by a commercially-available Monte Carlo algorithm on the end-exhalation 4D-CT phase. 4D dose accumulations using deformable registration were performed with a commercially available tool that projected the planned dose onto every breathing phase without recalculation, as well as with a Monte Carlo recalculation of the dose on all breathing phases. The 3D planned dose (3D-EX), the 3D dose calculated on the average intensity image (3D-AVE), and the 4D accumulations of the dose calculated on the end-exhalation phase CT (4D-PR-EX), the mid-ventilation phase CT (4D-PR-MID), and the average intensity image (4D-PR-AVE), respectively, were compared against the accumulation of the Monte Carlo dose recalculated on every phase. Plan evaluation metrics relating to target volumes and critical structures relevant for lung SBRT were analyzed. Results: Plan evaluation metrics tabulated using 4D-PR-EX, 4D-PR-MID, and 4D-PR-AVE differed from those tabulated using Monte Carlo recalculation on every phase by an average of 0.14±0.70 Gy, -0.11±0.51 Gy, and 0.00±0.62 Gy, respectively. Deviations of between 8 and 13 Gy were observed between the 4D-MC calculations and both 3D methods for the proximal bronchial trees of 3 patients. Conclusions: 4D dose accumulation using projection without re-calculation may be sufficiently accurate compared to 4D dose accumulated from Monte Carlo recalculation on every phase, depending on institutional protocols. Use of 4D dose accumulation should be considered when evaluating normal tissue complication probability.

  2. Quality assurance for radiotherapy in prostate cancer: Point dose measurements in intensity modulated fields with large dose gradients

    SciTech Connect

    Escude, Lluis . E-mail: lluis.escude@gmx.net; Linero, Dolors; Molla, Meritxell; Miralbell, Raymond

    2006-11-15

    Purpose: We aimed to evaluate an optimization algorithm designed to find the most favorable points to position an ionization chamber (IC) for quality assurance dose measurements of patients treated for prostate cancer with intensity-modulated radiotherapy (IMRT) and fields up to 10 cm x 10 cm. Methods and Materials: Three cylindrical ICs (PTW, Freiburg, Germany) were used with volumes of 0.6 cc, 0.125 cc, and 0.015 cc. Dose measurements were made in a plastic phantom (PMMA) at 287 optimized points. An algorithm was designed to search for points with the lowest dose gradient. Measurements were made also at 39 nonoptimized points. Results were normalized to a reference homogeneous field introducing a dose ratio factor, which allowed us to compare measured vs. calculated values as percentile dose ratio factor deviations ΔF (%). A tolerance range of ΔF (%) of ±3% was considered. Results: Half of the ΔF (%) values obtained at nonoptimized points were outside the acceptable range. Values at optimized points were widely spread for the largest IC (i.e., 60% of the results outside the tolerance range), whereas for the two small-volume ICs, only 14.6% of the results were outside the tolerance interval. No differences were observed when comparing the two small ICs. Conclusions: The presented optimization algorithm is a useful tool to determine the best IC in-field position for optimal dose measurement conditions. A good agreement between calculated and measured doses can be obtained by positioning small volume chambers at carefully selected points in the field. Large chambers may be unreliable even in optimized points for IMRT fields ≤10 cm x 10 cm.

  3. Design of robust systolic algorithms

    SciTech Connect

    Varman, P.J.; Fussell, D.S.

    1983-01-01

    A primary reason for the susceptibility of systolic algorithms to faults is their strong dependence on the interconnection between the processors in a systolic array. A technique to transform any linear systolic algorithm into an equivalent pipelined algorithm that executes on arbitrary trees is presented. 5 references.

  4. High-performance combinatorial algorithms

    SciTech Connect

    Pinar, Ali

    2003-10-31

    Combinatorial algorithms have long played an important role in many applications of scientific computing such as sparse matrix computations and parallel computing. The growing importance of combinatorial algorithms in emerging applications like computational biology and scientific data mining calls for development of a high performance library for combinatorial algorithms. Building such a library requires a new structure for combinatorial algorithms research that enables fast implementation of new algorithms. We propose a structure for combinatorial algorithms research that mimics the research structure of numerical algorithms. Numerical algorithms research is nicely complemented with high performance libraries, and this can be attributed to the fact that there are only a small number of fundamental problems that underlie numerical solvers. Furthermore there are only a handful of kernels that enable implementation of algorithms for these fundamental problems. Building a similar structure for combinatorial algorithms will enable efficient implementations for existing algorithms and fast implementation of new algorithms. Our results will promote utilization of combinatorial techniques and will impact research in many scientific computing applications, some of which are listed.

  5. Multipartite entanglement in quantum algorithms

    SciTech Connect

    Bruss, D.; Macchiavello, C.

    2011-05-15

    We investigate the entanglement features of the quantum states employed in quantum algorithms. In particular, we analyze the multipartite entanglement properties in the Deutsch-Jozsa, Grover, and Simon algorithms. Our results show that for these algorithms most instances involve multipartite entanglement.

  6. Algorithm for Constructing Contour Plots

    NASA Technical Reports Server (NTRS)

    Johnson, W.; Silva, F.

    1984-01-01

    General computer algorithm developed for construction of contour plots. Algorithm accepts as input data values at set of points irregularly distributed over plane. Algorithm based on interpolation scheme: points in plane connected by straight-line segments to form set of triangles. Program written in FORTRAN IV.
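
    A brief sketch of the same idea in modern Python (the original is FORTRAN IV): scattered points are triangulated and contours are drawn by linear interpolation within each triangle. The random test data are illustrative assumptions.

    ```python
    import numpy as np
    import matplotlib.pyplot as plt
    import matplotlib.tri as mtri

    rng = np.random.default_rng(0)
    x, y = rng.uniform(0, 1, 200), rng.uniform(0, 1, 200)  # irregularly distributed points
    z = np.sin(4 * x) * np.cos(4 * y)                       # data values at those points

    triangulation = mtri.Triangulation(x, y)     # connect points into a set of triangles
    plt.tricontour(triangulation, z, levels=10)  # contour by interpolation within each triangle
    plt.savefig("contours.png")
    ```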

  7. Polynomial Algorithms for Item Matching.

    ERIC Educational Resources Information Center

    Armstrong, Ronald D.; Jones, Douglas H.

    1992-01-01

    Polynomial algorithms are presented that are used to solve selected problems in test theory, and computational results from sample problems with several hundred decision variables are provided that demonstrate the benefits of these algorithms. The algorithms are based on optimization theory in networks (graphs). (SLD)

  8. Verifying a Computer Algorithm Mathematically.

    ERIC Educational Resources Information Center

    Olson, Alton T.

    1986-01-01

    Presents an example of mathematics from an algorithmic point of view, with emphasis on the design and verification of this algorithm. The program involves finding roots for algebraic equations using the half-interval search algorithm. The program listing is included. (JN)

  9. Efficient iterative image reconstruction algorithm for dedicated breast CT

    NASA Astrophysics Data System (ADS)

    Antropova, Natalia; Sanchez, Adrian; Reiser, Ingrid S.; Sidky, Emil Y.; Boone, John; Pan, Xiaochuan

    2016-03-01

    Dedicated breast computed tomography (bCT) is currently being studied as a potential screening method for breast cancer. The X-ray exposure is set low to achieve an average glandular dose comparable to that of mammography, yielding projection data that contains high levels of noise. Iterative image reconstruction (IIR) algorithms may be well-suited for the system since they potentially reduce the effects of noise in the reconstructed images. However, IIR outcomes can be difficult to control since the algorithm parameters do not directly correspond to the image properties. Also, IIR algorithms are computationally demanding and have optimal parameter settings that depend on the size and shape of the breast and positioning of the patient. In this work, we design an efficient IIR algorithm with meaningful parameter specifications and that can be used on a large, diverse sample of bCT cases. The flexibility and efficiency of this method comes from having the final image produced by a linear combination of two separately reconstructed images - one containing gray level information and the other with enhanced high frequency components. Both of the images result from few iterations of separate IIR algorithms. The proposed algorithm depends on two parameters both of which have a well-defined impact on image quality. The algorithm is applied to numerous bCT cases from a dedicated bCT prototype system developed at University of California, Davis.
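
    The final-image construction described above is a linear combination of two intermediate reconstructions. The sketch below shows only that combination step with stand-in images (a smoothed image for the gray-level component and an unsharp-masked image for the high-frequency component); the actual IIR algorithms are not reproduced, and the weight is an illustrative assumption.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def combine_reconstructions(gray_level_img, high_freq_img, weight=0.3):
        """Final image as a weighted sum controlled by one interpretable parameter."""
        return (1.0 - weight) * gray_level_img + weight * high_freq_img

    image = np.random.rand(128, 128)                             # stand-in reconstruction input
    gray = gaussian_filter(image, sigma=2.0)                     # gray-level component
    high = image + (image - gaussian_filter(image, sigma=1.0))   # high-frequency-enhanced component
    final = combine_reconstructions(gray, high, weight=0.3)
    ```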

  10. Radiation dose rate meter

    SciTech Connect

    Kronenberg, S.; Siebentritt, C.R.

    1981-07-28

    A combined dose rate meter and charger unit therefor which does not require the use of batteries but instead produces a charging potential by means of a piezoelectric cylinder which is struck by a manually triggered hammer mechanism. A tubular type electrometer is mounted in a portable housing which additionally includes a Geiger-Müller (GM) counter tube and electronic circuitry coupled to the electrometer for providing multi-mode operation. In one mode of operation, an RC circuit of predetermined time constant is connected to a storage capacitor which serves as a timed power source for the GM tube, providing a measurement in terms of dose rate which is indicated by the electrometer. In another mode, the electrometer indicates individual counts.

  11. Estimation of the Dose and Dose Rate Effectiveness Factor

    NASA Technical Reports Server (NTRS)

    Chappell, L.; Cucinotta, F. A.

    2013-01-01

    Current models to estimate radiation risk use the Life Span Study (LSS) cohort that received high doses and high dose rates of radiation. Transferring risks from these high dose rates to the low doses and dose rates received by astronauts in space is a source of uncertainty in our risk calculations. The solid cancer models recommended by BEIR VII [1], UNSCEAR [2], and Preston et al [3] are fitted adequately by a linear dose response model, which implies that low doses and dose rates would be estimated the same as high doses and dose rates. However, animal and cell experiments imply there should be curvature in the dose response curve for tumor induction. Furthermore, animal experiments that directly compare acute to chronic exposures show lower increases in tumor induction for chronic than for acute exposures. A dose and dose rate effectiveness factor (DDREF) has been estimated and applied to transfer risks from the high doses and dose rates of the LSS cohort to low doses and dose rates such as from missions in space. The BEIR VII committee [1] combined DDREF estimates using the LSS cohort and animal experiments using Bayesian methods for their recommendation of a DDREF value of 1.5 with uncertainty. We reexamined the animal data considered by BEIR VII and included more animal data and human chromosome aberration data to improve the estimate for DDREF. Several experiments chosen by BEIR VII were deemed inappropriate for application to human risk models of solid cancer risk. Animal tumor experiments performed by Ullrich et al [4], Alpen et al [5], and Grahn et al [6] were analyzed to estimate the DDREF. Human chromosome aberration experiments performed on a sample of astronauts within NASA were also available to estimate the DDREF. The LSS cohort results reported by BEIR VII were combined with the new radiobiology results using Bayesian methods.
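
    One common way to express a DDREF is through the curvature of a linear-quadratic (LQ) fit to acute-exposure data: with risk(D) = alpha*D + beta*D**2, DDREF(D) = 1 + (beta/alpha)*D. The sketch below fits an LQ model to synthetic data and evaluates that ratio; it is a hedged illustration only and does not reproduce the Bayesian combination of LSS, animal, and chromosome-aberration data described above.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def lq_model(dose, alpha, beta):
        return alpha * dose + beta * dose**2

    dose_gy = np.array([0.0, 0.5, 1.0, 2.0, 3.0])
    excess_risk = np.array([0.0, 0.06, 0.14, 0.36, 0.66])   # synthetic acute-exposure data

    (alpha, beta), _ = curve_fit(lq_model, dose_gy, excess_risk)
    reference_dose = 1.0  # Gy
    ddref = 1.0 + (beta / alpha) * reference_dose
    print(f"alpha={alpha:.3f}, beta={beta:.3f}, DDREF at {reference_dose} Gy = {ddref:.2f}")
    ```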

  12. [Quality control dose calibrators].

    PubMed

    Montoza Aguado, M; Delgado García, A; Ramírez Navarro, A; Salgado García, C; Muros de Fuentes, M A; Ortega Lozano, S; Bellón Guardia, M E; Llamas Elvira, J M

    2004-01-01

    We have reviewed the legislation on the quality control of dose calibrators. Verifying the correct operation of these instruments is fundamental in the daily practice of radiopharmacy and nuclear medicine. The Spanish legislation requires that these controls be included as part of the quality control of radiopharmaceuticals and the quality assurance programme in nuclear medicine. We have reviewed guides and protocols from eminent international organizations, summarizing the recommended tests and their periodicity. PMID:15625064

  13. Dose escalation in radioimmunotherapy based on projected whole body dose

    SciTech Connect

    Wahl, R.L.; Kaminski, M.S.; Regan, D.

    1994-05-01

    A variety of approaches have been utilized in conducting phase I radioimmunotherapy dose-escalation trials. Escalation of dose has been based on graded increases in administered mCi; mCi/kg; or mCi/m2. It is also possible to escalate dose based on tracer-projected marrow, blood or whole body radiation dose. We describe our results in performing a dose-escalation trial in patients with non-Hodgkin lymphoma based on escalating administered whole-body radiation dose. The mCi dose administered was based on a patient-individualized tracer projected whole-body dose. Twenty-five patients were entered on the study. RIT with 131I anti-B-1 was administered to 19 patients. The administered dose was prescribed based on the projected whole body dose, determined from patient-individualized tracer studies performed prior to RIT. Whole body dose estimates were based on the assumption that the patient was an ellipsoid, with 131I antibody kinetics determined using a whole-body probe device acquiring daily conjugate views of 1 minute duration/view. Dose escalation levels proceeded with 10 cGy increments from 25 cGy whole-body and continue, now at 75 cGy. The correlation among potential methods of dose escalation and toxicity was assessed. Whole body radiation dose by probe was strongly correlated with the blood radiation dose determined from sequential blood sampling during tracer studies (r=.87). Blood radiation dose was very weakly correlated with mCi dose (r=.4) and mCi/kg (r=.45). Whole body radiation dose appeared less well-correlated with injected dose in mCi (r=.6), or mCi/kg (r=.64). Toxicity has been infrequent in these patients, but appears related to increasing whole body dose. Determination of whole-body radiation dose by gamma probe thus represents a non-invasive method of estimating blood radiation dose, and thereby bone marrow radiation dose.
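
    The prescription logic described above reduces to scaling the administered activity so that the tracer-projected whole-body dose per unit activity reaches the escalation level. The sketch below shows that proportionality with illustrative numbers; it is an assumption-laden simplification, not the trial's dosimetry workflow.

    ```python
    def therapy_activity_mci(prescribed_whole_body_cgy, tracer_cgy_per_mci):
        """Administered activity needed to reach the prescribed whole-body dose."""
        return prescribed_whole_body_cgy / tracer_cgy_per_mci

    # e.g., a tracer study projecting 0.55 cGy whole-body dose per administered mCi
    print(therapy_activity_mci(prescribed_whole_body_cgy=75.0,
                               tracer_cgy_per_mci=0.55))   # ~136 mCi
    ```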

  14. VMATc: VMAT with constant gantry speed and dose rate.

    PubMed

    Peng, Fei; Jiang, Steve B; Romeijn, H Edwin; Epelman, Marina A

    2015-04-01

    This article considers the treatment plan optimization problem for Volumetric Modulated Arc Therapy (VMAT) with constant gantry speed and dose rate (VMATc). In particular, we consider the simultaneous optimization of multi-leaf collimator leaf positions and a constant gantry speed and dose rate. We propose a heuristic framework for (approximately) solving this optimization problem that is based on hierarchical decomposition. Specifically, an iterative algorithm is used to heuristically optimize dose rate and gantry speed selection, where at every iteration a leaf position optimization subproblem is solved, also heuristically, to find a high-quality plan corresponding to a given dose rate and gantry speed. We apply our framework to clinical patient cases, and compare the resulting VMATc plans to idealized IMRT, as well as full VMAT plans. Our results suggest that VMATc is capable of producing treatment plans of comparable quality to VMAT, albeit at the expense of long computation time and generally higher total monitor units. PMID:25789937

  15. Proton dose calculation based on in-air fluence measurements

    NASA Astrophysics Data System (ADS)

    Schaffner, Barbara

    2008-03-01

    Proton dose calculation algorithms—as well as photon and electron algorithms—are usually based on configuration measurements taken in a water phantom. The exceptions to this are proton dose calculation algorithms for modulated scanning beams. There, it is usual to measure the spot profiles in air. We use the concept of in-air configuration measurements also for scattering and uniform scanning (wobbling) proton delivery techniques. The dose calculation includes a separate step for the calculation of the in-air fluence distribution per energy layer. The in-air fluence calculation is specific to the technique and—to a lesser extent—design of the treatment machine. The actual dose calculation uses the in-air fluence as input and is generic for all proton machine designs and techniques.

  16. Effect of deformable registration on the dose calculated in radiation therapy planning CT scans of lung cancer patients a)

    PubMed Central

    Cunliffe, Alexandra R.; Contee, Clay; Armato, Samuel G.; White, Bradley; Justusson, Julia; Malik, Renuka; Al-Hallaq, Hania A.

    2015-01-01

    Purpose: To characterize the effects of deformable image registration of serial computed tomography (CT) scans on the radiation dose calculated from a treatment planning scan. Methods: Eighteen patients who received curative doses (≥60 Gy, 2 Gy/fraction) of photon radiation therapy for lung cancer treatment were retrospectively identified. For each patient, a diagnostic-quality pretherapy (4–75 days) CT scan and a treatment planning scan with an associated dose map were collected. To establish correspondence between scan pairs, a researcher manually identified anatomically corresponding landmark point pairs between the two scans. Pretherapy scans then were coregistered with planning scans (and associated dose maps) using the demons deformable registration algorithm and two variants of the Fraunhofer MEVIS algorithm (“Fast” and “EMPIRE10”). Landmark points in each pretherapy scan were automatically mapped to the planning scan using the displacement vector field output from each of the three algorithms. The Euclidean distance between manually and automatically mapped landmark points (dE) and the absolute difference in planned dose (|ΔD|) were calculated. Using regression modeling, |ΔD| was modeled as a function of dE, dose (D), dose standard deviation (SDdose) in an eight-pixel neighborhood, and the registration algorithm used. Results: Over 1400 landmark point pairs were identified, with 58–93 (median: 84) points identified per patient. Average |ΔD| across patients was 3.5 Gy (range: 0.9–10.6 Gy). Registration accuracy was highest using the Fraunhofer MEVIS EMPIRE10 algorithm, with an average dE across patients of 5.2 mm (compared with >7 mm for the other two algorithms). Consequently, average |ΔD| was also lowest using the Fraunhofer MEVIS EMPIRE10 algorithm. |ΔD| increased significantly as a function of dE (0.42 Gy/mm), D (0.05 Gy/Gy), SDdose (1.4 Gy/Gy), and the algorithm used (≤1 Gy). Conclusions: An average error of <4 Gy in radiation dose was introduced when deformable registration was used to map planned dose between serial CT scans.
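
    The regression described above can be reproduced in outline with ordinary least squares: |ΔD| as a linear function of dE, D, and SDdose. The sketch below uses synthetic data whose coefficients roughly echo the reported values; it is illustrative only and omits the categorical algorithm term.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 500
    dE = rng.exponential(5.0, n)       # mm, landmark mapping error
    D = rng.uniform(0, 66, n)          # Gy, planned dose at the landmark
    SDdose = rng.uniform(0, 3, n)      # Gy, dose SD in an eight-pixel neighborhood
    abs_dD = 0.4 * dE + 0.05 * D + 1.4 * SDdose + rng.normal(0, 1, n)  # synthetic response

    X = np.column_stack([np.ones(n), dE, D, SDdose])
    coef, *_ = np.linalg.lstsq(X, abs_dD, rcond=None)
    print(dict(zip(["intercept", "Gy_per_mm", "Gy_per_Gy_dose", "Gy_per_Gy_SD"], coef.round(3))))
    ```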

  17. Improved multiprocessor garbage collection algorithms

    SciTech Connect

    Newman, I.A.; Stallard, R.P.; Woodward, M.C.

    1983-01-01

    Outlines the results of an investigation of existing multiprocessor garbage collection algorithms and introduces two new algorithms which significantly improve some aspects of the performance of their predecessors. The two algorithms arise from different starting assumptions. One considers the case where the algorithm will terminate successfully whatever list structure is being processed and assumes that the extra data space should be minimised. The other seeks a very fast garbage collection time for list structures that do not contain loops. Results of both theoretical and experimental investigations are given to demonstrate the efficacy of the algorithms. 7 references.

  18. Efficient multicomponent fuel algorithm

    NASA Astrophysics Data System (ADS)

    Torres, D. J.; O'Rourke, P. J.; Amsden, A. A.

    2003-03-01

    We derive equations for multicomponent fuel evaporation in airborne fuel droplets and wall films, and implement the model into KIVA-3V. Temporal and spatial variations in liquid droplet composition and temperature are not modelled but solved for by discretizing the interior of the droplet in an implicit and computationally efficient way. We find that an interior discretization is necessary to correctly compute the evolution of the droplet composition. The details of the one-dimensional numerical algorithm are described. Numerical simulations of multicomponent evaporation are performed for single droplets and compared to experimental data.

  19. A new minimax algorithm

    NASA Technical Reports Server (NTRS)

    Vardi, A.

    1984-01-01

    The representation min t s.t. F_i(x) - t ≤ 0 for all i is examined. An active set strategy is designed with three classes of functions: active, semi-active, and non-active. This technique will help in preventing zigzagging, which often occurs when an active set strategy is used. Some of the inequality constraints are handled with slack variables. Also, a trust region strategy is used in which at each iteration there is a sphere around the current point in which the local approximation of the function is trusted. The algorithm is implemented in a successful computer program. Numerical results are provided.

  20. Detection of low level gaseous releases and dose evaluation from continuous gamma dose measurements using a wavelet transformation technique.

    PubMed

    Paul, Sabyasachi; Rao, D D; Sarkar, P K

    2012-11-01

    Measurement of environmental dose in the vicinity of a nuclear power plant site (Tarapur, India) was carried out continuously for the years 2007-2010, and attempts were made to quantify the additional contributions from nuclear power plants over natural background by segregating the background fluctuations from the events due to plume passage using a non-decimated wavelet approach. A conservative estimate obtained using wavelet-based analysis has shown a maximum annual dose of 38 μSv at 1.6 km and 4.8 μSv at 10 km from the installation. The detected events within a year are in good agreement with the month-wise wind-rose profile, indicating the reliability of the algorithm for proper detection of events from the continuous dose rate measurements. The results were validated with the dispersion model dose predictions using the source term from routine monitoring data and meteorological parameters. PMID:22940411
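
    A non-decimated (stationary) wavelet transform keeps the full time resolution at every scale, so short plume-passage events stand out in the fine-scale detail coefficients while the slowly varying background stays in the approximation. The sketch below illustrates that separation on a synthetic dose-rate record; the signal, wavelet choice, and simple threshold rule are assumptions for illustration, not the paper's algorithm.

    ```python
    import numpy as np
    import pywt

    rng = np.random.default_rng(2)
    hours = 1024
    background = 0.10 + 0.01 * np.sin(np.arange(hours) / 50.0)   # slowly drifting background (uGy/h)
    record = background + rng.normal(0, 0.005, hours)
    record[400:420] += 0.05                                      # a short plume-passage event

    coeffs = pywt.swt(record, wavelet="db4", level=4)            # non-decimated transform
    d1 = coeffs[-1][1]                                           # finest-scale detail coefficients
    sigma = np.median(np.abs(d1)) / 0.6745                       # robust noise estimate (MAD)
    events = np.abs(d1) > 5 * sigma                              # flag candidate events
    print("flagged hours:", np.flatnonzero(events)[:10])
    ```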

  1. Quantification of the impact of MLC modeling and tissue heterogeneities on dynamic IMRT dose calculations

    SciTech Connect

    Mihaylov, I. B.; Lerma, F. A.; Fatyga, M.; Siebers, J. V.

    2007-04-15

    This study quantifies the dose prediction errors (DPEs) in dynamic IMRT dose calculations resulting from (a) use of an intensity matrix to estimate the multi-leaf collimator (MLC) modulated photon fluence (DPE_IGfluence) instead of an explicit MLC particle transport, and (b) handling of tissue heterogeneities (DPE_hetero) by superposition/convolution (SC) and pencil beam (PB) dose calculation algorithms. Monte Carlo (MC) computed doses are used as reference standards. Eighteen head-and-neck dynamic MLC IMRT treatment plans are investigated. DPEs are evaluated via comparing the dose received by 98% of the GTV (GTV D98%), the CTV D95%, the nodal D90%, the cord and the brainstem D02%, the parotid D50%, the parotid mean dose (DMean), and generalized equivalent uniform doses (gEUDs) for the above structures. For the MC-generated intensity grids, DPE_IGfluence is within ±2.1% for all targets and critical structures. The SC algorithm DPE_hetero is within ±3% for 98.3% of the indices tallied, and within ±3.4% for all of the tallied indices. The PB algorithm DPE_hetero is within ±3% for 92% of the tallied indices. Statistical equivalence tests indicate that PB DPE_hetero requires a ±3.6% interval to state equivalence with the MC standard, while the intervals are <1.5% for SC DPE_hetero and DPE_IGfluence. Overall, these results indicate that SC and MC IMRT dose calculations which use MC-derived intensity matrices for fluence prediction do not introduce significant dose errors compared with full Monte Carlo dose computations; however, PB algorithms may result in clinically significant dose deviations.

  2. Join-Graph Propagation Algorithms

    PubMed Central

    Mateescu, Robert; Kask, Kalev; Gogate, Vibhav; Dechter, Rina

    2010-01-01

    The paper investigates parameterized approximate message-passing schemes that are based on bounded inference and are inspired by Pearl's belief propagation algorithm (BP). We start with the bounded inference mini-clustering algorithm and then move to the iterative scheme called Iterative Join-Graph Propagation (IJGP), that combines both iteration and bounded inference. Algorithm IJGP belongs to the class of Generalized Belief Propagation algorithms, a framework that allowed connections with approximate algorithms from statistical physics and is shown empirically to surpass the performance of mini-clustering and belief propagation, as well as a number of other state-of-the-art algorithms on several classes of networks. We also provide insight into the accuracy of iterative BP and IJGP by relating these algorithms to well known classes of constraint propagation schemes. PMID:20740057

  3. Constructive neural network learning algorithms

    SciTech Connect

    Parekh, R.; Yang, Jihoon; Honavar, V.

    1996-12-31

    Constructive algorithms offer an approach for incremental construction of potentially minimal neural network architectures for pattern classification tasks. These algorithms obviate the need for an ad hoc a priori choice of the network topology. The constructive algorithm design involves alternately augmenting the existing network topology by adding one or more threshold logic units and training the newly added threshold neuron(s) using a stable variant of the perceptron learning algorithm (e.g., pocket algorithm, thermal perceptron, and barycentric correction procedure). Several constructive algorithms including tower, pyramid, tiling, upstart, and perceptron cascade have been proposed for 2-category pattern classification. These algorithms differ in terms of their topological and connectivity constraints as well as the training strategies used for individual neurons.

  4. Proton dose distribution measurements using a MOSFET detector with a simple dose-weighted correction method for LET effects.

    PubMed

    Kohno, Ryosuke; Hotta, Kenji; Matsuura, Taeko; Matsubara, Kana; Nishioka, Shie; Nishio, Teiji; Kawashima, Mitsuhiko; Ogino, Takashi

    2011-01-01

    We experimentally evaluated the proton beam dose reproducibility, sensitivity, angular dependence and depth-dose relationships for a new Metal Oxide Semiconductor Field Effect Transistor (MOSFET) detector. The detector was fabricated with a thinner oxide layer and was operated at high-bias voltages. In order to accurately measure dose distributions, we developed a practical method for correcting the MOSFET response to proton beams. The detector was tested by examining lateral dose profiles formed by protons passing through an L-shaped bolus. The dose reproducibility, angular dependence and depth-dose response were evaluated using a 190 MeV proton beam. Depth-output curves produced using the MOSFET detectors were compared with results obtained using an ionization chamber (IC). Since accurate measurements of proton dose distribution require correction for LET effects, we developed a simple dose-weighted correction method. The correction factors were determined as a function of proton penetration depth, or residual range. The residual proton range at each measurement point was calculated using the pencil beam algorithm. Lateral measurements in a phantom were obtained for pristine and SOBP beams. The reproducibility of the MOSFET detector was within 2%, and the angular dependence was less than 9%. The detector exhibited a good response at the Bragg peak (0.74 relative to the IC detector). For dose distributions resulting from protons passing through an L-shaped bolus, the corrected MOSFET dose agreed well with the IC results. Absolute proton dosimetry can be performed using MOSFET detectors to a precision of about 3% (1 sigma). A thinner oxide layer improved the LET response in proton dosimetry. By employing correction methods for LET dependence, it is possible to measure absolute proton dose using MOSFET detectors. PMID:21587191
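
    The dose-weighted correction described above amounts to multiplying each raw MOSFET reading by a factor looked up as a function of residual range. The sketch below shows that lookup with an invented table; the actual correction factors come from measurements and a pencil-beam calculation of residual range, so the numbers here are placeholders.

    ```python
    import numpy as np

    residual_range_cm = np.array([0.2, 1.0, 3.0, 6.0, 10.0])        # lookup grid (illustrative)
    correction_factor = np.array([1.35, 1.20, 1.10, 1.04, 1.00])    # placeholder factors

    def corrected_dose(raw_reading_gy, residual_range):
        factor = np.interp(residual_range, residual_range_cm, correction_factor)
        return raw_reading_gy * factor

    print(corrected_dose(raw_reading_gy=1.50, residual_range=0.5))  # near the Bragg peak
    ```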

  5. A computer simulation method for low-dose CT images by use of real high-dose images: a phantom study.

    PubMed

    Takenaga, Tomomi; Katsuragawa, Shigehiko; Goto, Makoto; Hatemura, Masahiro; Uchiyama, Yoshikazu; Shiraishi, Junji

    2016-01-01

    Practical simulation of low-dose CT images could be a helpful means of optimizing the CT exposure dose. Because current methods reported by several researchers are limited to specific vendor platforms and generally rely on raw sinogram data that are difficult to access, we have developed a new computerized scheme for producing simulated low-dose CT images from real high-dose images without use of raw sinogram data or of a particular phantom. Our computerized scheme for low-dose CT simulation was based on the addition of a simulated noise image to a real high-dose CT image reconstructed by the filtered back-projection algorithm. First, a sinogram was generated from the forward projection of a high-dose CT image. Then, an additional noise sinogram resulting from use of a reduced exposure dose was estimated from a predetermined noise model. Finally, a noise CT image was reconstructed with a predetermined filter and was added to the real high-dose CT image to create a simulated low-dose CT image. The noise power spectrum and modulation transfer function of the simulated low-dose images were very close to those of the real low-dose images. In order to confirm the feasibility of our method, we applied it to clinical cases that were examined initially at a high dose and then followed up with low-dose CT. In conclusion, our proposed method could simulate low-dose CT images from real high-dose images with sufficient accuracy and could be used for determining the optimal dose setting for various clinical CT examinations. PMID:26290269
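
    A minimal sketch of the same pipeline under simplifying assumptions: forward-project a high-dose image, add a noise sinogram for the reduced dose, reconstruct that noise with filtered back-projection, and sum. The Gaussian noise model and dose-reduction factor below are stand-ins for the paper's predetermined noise model.

    ```python
    import numpy as np
    from skimage.data import shepp_logan_phantom
    from skimage.transform import radon, iradon

    high_dose = shepp_logan_phantom()
    theta = np.linspace(0.0, 180.0, high_dose.shape[0], endpoint=False)

    sinogram = radon(high_dose, theta=theta)       # forward projection of the high-dose image
    dose_fraction = 0.25                           # simulate 25% of the original exposure
    extra_sigma = np.sqrt(1.0 / dose_fraction - 1.0) * 0.05 * np.sqrt(np.abs(sinogram) + 1e-6)
    noise_sino = np.random.default_rng(0).normal(0.0, extra_sigma)

    noise_image = iradon(noise_sino, theta=theta, filter_name="ramp")  # reconstruct the noise
    simulated_low_dose = high_dose + noise_image                       # add to the real image
    ```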

  6. Online Planning Algorithm

    NASA Technical Reports Server (NTRS)

    Rabideau, Gregg R.; Chien, Steve A.

    2010-01-01

    AVA v2 software selects goals for execution from a set of goals that oversubscribe shared resources. The term goal refers to a science or engineering request to execute a possibly complex command sequence, such as image targets or ground-station downlinks. Developed as an extension to the Virtual Machine Language (VML) execution system, the software enables onboard and remote goal triggering through the use of an embedded, dynamic goal set that can oversubscribe resources. From the set of conflicting goals, a subset must be chosen that maximizes a given quality metric, which in this case is strict priority selection. A goal can never be pre-empted by a lower priority goal, and high-level goals can be added, removed, or updated at any time, and the "best" goals will be selected for execution. The software addresses the issue of re-planning that must be performed in a short time frame by the embedded system where computational resources are constrained. In particular, the algorithm addresses problems with well-defined goal requests without temporal flexibility that oversubscribes available resources. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion allowing requests to be changed or added at the last minute, thereby enabling shorter response times and greater autonomy for the system under control.

  7. Contour Error Map Algorithm

    NASA Technical Reports Server (NTRS)

    Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John

    2005-01-01

    The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively; and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: One identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function for the purpose of attempting to eliminate river-breeze contributions in the wind fields.
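
    The preprocessing step above reduces to mapping each grid point's wind direction to a binary onshore/offshore value before the CEM comparison. The sketch below shows one way to do that; the onshore sector chosen here is an arbitrary illustrative assumption, not the study's definition.

    ```python
    import numpy as np

    def binarize_onshore(wind_dir_deg, onshore_min=45.0, onshore_max=225.0):
        """1 where the wind direction falls in the (assumed) onshore sector, else 0."""
        return ((wind_dir_deg >= onshore_min) & (wind_dir_deg <= onshore_max)).astype(np.uint8)

    # D(i, j; n): binarized forecast field on a grid of 5-minute time steps
    forecast_dir = np.random.default_rng(3).uniform(0, 360, size=(24, 80, 80))
    D = binarize_onshore(forecast_dir)
    ```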

  8. NEKF IMM tracking algorithm

    NASA Astrophysics Data System (ADS)

    Owen, Mark W.; Stubberud, Allen R.

    2003-12-01

    Highly maneuvering threats are a major concern for the Navy and the DoD and the technology discussed in this paper is intended to help address this issue. A neural extended Kalman filter algorithm has been embedded in an interacting multiple model architecture for target tracking. The neural extended Kalman filter algorithm is used to improve motion model prediction during maneuvers. With a better target motion mode, noise reduction can be achieved through a maneuver. Unlike the interacting multiple model architecture which uses a high process noise model to hold a target through a maneuver with poor velocity and acceleration estimates, a neural extended Kalman filter is used to predict corrections to the velocity and acceleration states of a target through a maneuver. The neural extended Kalman filter estimates the weights of a neural network, which in turn are used to modify the state estimate predictions of the filter as measurements are processed. The neural network training is performed on-line as data is processed. In this paper, the simulation results of a tracking problem using a neural extended Kalman filter embedded in an interacting multiple model tracking architecture are shown. Preliminary results on the 2nd Benchmark Problem are also given.

  9. NEKF IMM tracking algorithm

    NASA Astrophysics Data System (ADS)

    Owen, Mark W.; Stubberud, Allen R.

    2004-01-01

    Highly maneuvering threats are a major concern for the Navy and the DoD and the technology discussed in this paper is intended to help address this issue. A neural extended Kalman filter algorithm has been embedded in an interacting multiple model architecture for target tracking. The neural extended Kalman filter algorithm is used to improve motion model prediction during maneuvers. With a better target motion mode, noise reduction can be achieved through a maneuver. Unlike the interacting multiple model architecture which uses a high process noise model to hold a target through a maneuver with poor velocity and acceleration estimates, a neural extended Kalman filter is used to predict corrections to the velocity and acceleration states of a target through a maneuver. The neural extended Kalman filter estimates the weights of a neural network, which in turn are used to modify the state estimate predictions of the filter as measurements are processed. The neural network training is performed on-line as data is processed. In this paper, the simulation results of a tracking problem using a neural extended Kalman filter embedded in an interacting multiple model tracking architecture are shown. Preliminary results on the 2nd Benchmark Problem are also given.

  10. Steep dose gradients for simultaneous integrated boost IMRT.

    PubMed

    Bratengeier, Klaus; Meyer, Jürgen; Schwab, Franz; Vordermark, Dirk; Flentje, Michael

    2009-01-01

    Steep dose gradients between two planning target volumes (PTVs), as may be required for simultaneous integrated boosts (SIB), should be an option provided by IMRT algorithms. The aim was to analyse the geometry of the SIB problem and to implement the results in an algorithm for IMRT segment generation denoted two-step intensity modulated radiotherapy (2-Step IMRT). It was hypothesized that a gap between segments directed to the inner and the outer PTV would steepen the dose gradient. The mathematical relationships were derived from the individual dose levels and the geometry (diameters) of the PTVs. The results generated by means of 2-Step IMRT segments were equivalent to or better than those generated by a commercial IMRT planning system. The dose to both the inner and the outer PTV was clearly more homogeneous and the composite objective value was the lowest. The segment numbers were lower or equal, with better sparing of the surrounding tissue. In summary, it was demonstrated that 2-Step IMRT was able to achieve steep dose gradients for SIB constellations. PMID:19678528

  11. Dose discrepancies in the buildup region and their impact on dose calculations for IMRT fields

    SciTech Connect

    Hsu, Shu-Hui; Moran, Jean M.; Chen Yu; Kulasekere, Ravi; Roberson, Peter L.

    2010-05-15

    Purpose: Dose accuracy in the buildup region for radiotherapy treatment planning suffers from challenges in both measurement and calculation. This study investigates the dosimetry in the buildup region at normal and oblique incidences for open and IMRT fields and assesses the quality of the treatment planning calculations. Methods: This study was divided into three parts. First, percent depth doses and profiles (for 5x5, 10x10, 20x20, and 30x30 cm² field sizes at 0°, 45°, and 70° incidences) were measured in the buildup region in Solid Water using an Attix parallel plate chamber and Kodak XV film, respectively. Second, the parameters in the empirical contamination (EC) term of the convolution/superposition (CVSP) calculation algorithm were fitted based on open field measurements. Finally, seven segmental head-and-neck IMRT fields were measured on a flat phantom geometry and compared to calculations using γ and dose-gradient compensation (C) indices to evaluate the impact of residual discrepancies and to assess the adequacy of the contamination term for IMRT fields. Results: Local deviations between measurements and calculations for open fields were within 1% and 4% in the buildup region for normal and oblique incidences, respectively. The C index with 5%/1 mm criteria for IMRT fields ranged from 89% to 99% and from 96% to 98% at 2 mm and 10 cm depths, respectively. The quality of agreement in the buildup region for open and IMRT fields is comparable to that in nonbuildup regions. Conclusions: The added EC term in CVSP was determined to be adequate for both open and IMRT fields. Due to the dependence of calculation accuracy on (1) EC modeling, (2) internal convolution and density grid sizes, (3) implementation details in the algorithm, and (4) the accuracy of measurements used for treatment planning system commissioning, the authors recommend an evaluation of the accuracy of near-surface dose calculations as a part of treatment planning system commissioning.

  12. History of dose specification in Brachytherapy: From Threshold Erythema Dose to Computational Dosimetry

    NASA Astrophysics Data System (ADS)

    Williamson, Jeffrey F.

    2006-09-01

    This paper briefly reviews the evolution of brachytherapy dosimetry from 1900 to the present. Dosimetric practices in brachytherapy fall into three distinct eras: During the era of biological dosimetry (1900-1938), radium pioneers could only specify Ra-226 and Rn-222 implants in terms of the mass of radium encapsulated within the implanted sources. Due to the high energy of its emitted gamma rays and the long range of its secondary electrons in air, free-air chambers could not be used to quantify the output of Ra-226 sources in terms of exposure. Biological dosimetry, most prominently the threshold erythema dose, gained currency as a means of intercomparing radium treatments with exposure-calibrated orthovoltage x-ray units. The classical dosimetry era (1940-1980) began with successful exposure standardization of Ra-226 sources by Bragg-Gray cavity chambers. Classical dose-computation algorithms, based upon 1-D buildup factor measurements and point-source superposition computational algorithms, were able to accommodate artificial radionuclides such as Co-60, Ir-192, and Cs-137. The quantitative dosimetry era (1980- ) arose in response to the increasing utilization of low energy K-capture radionuclides such as I-125 and Pd-103 for which classical approaches could not be expected to estimate accurate correct doses. This led to intensive development of both experimental (largely TLD-100 dosimetry) and Monte Carlo dosimetry techniques along with more accurate air-kerma strength standards. As a result of extensive benchmarking and intercomparison of these different methods, single-seed low-energy radionuclide dose distributions are now known with a total uncertainty of 3%-5%.

  13. Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    Cannon, S.D.; Finch, S.M.

    1992-10-01

    The objective of the Hanford Environmental Dose Reconstruction (HEDR) Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The independent Technical Steering Panel (TSP) provides technical direction. The project is divided into the following technical tasks. These tasks correspond to the path radionuclides followed, from release to impact on humans (dose estimates): Source Terms, Environmental Transport, Environmental Monitoring Data, Demography, Food Consumption, and Agriculture, and Environmental Pathways and Dose Estimates.

  14. Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    Finch, S.M.; McMakin, A.H.

    1992-01-01

    The objective of the Hanford Environmental Dose Reconstruction Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The project is divided into the following technical tasks. These tasks correspond to the path radionuclides followed, from release to impact on humans (dose estimates): Source Terms, Environmental Transport, Environmental Monitoring Data, Demography, Food Consumption, and Agriculture, and Environmental Pathways and Dose Estimates.

  15. Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    Finch, S.M.; McMakin, A.H.

    1991-01-01

    The objective of the Hanford Environmental Dose Reconstruction Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The project is divided into the following technical tasks. These tasks correspond to the path radionuclides followed, from release to impact on humans (dose estimates): Source terms; environmental transport environmental monitoring data; demographics, agriculture, food habits; environmental pathways and dose estimates.

  16. Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    Finch, S.M.; McMakin, A.H.

    1992-02-01

    The objective of the Hanford Environmental Dose Reconstruction Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The project is divided into the following technical tasks. These tasks correspond to the path radionuclides followed, from release to impact on humans (dose estimates): source terms; environmental transport; environmental monitoring data; demography, food consumption, and agriculture; environmental pathways and dose estimates.

  17. Deformable Dose Reconstruction to Optimize the Planning and Delivery of Liver Cancer Radiotherapy

    NASA Astrophysics Data System (ADS)

    Velec, Michael

    The precise delivery of radiation to liver cancer patients results in improved control with higher tumor doses and minimized normal tissue doses. A margin of normal tissue around the tumor must be irradiated, however, to account for treatment delivery uncertainties. Daily image-guidance allows targeting of the liver, a surrogate for the tumor, to reduce geometric errors. However, poor direct tumor visualization, anatomical deformation and breathing motion introduce uncertainties between the planned dose, calculated on a single pre-treatment computed tomography image, and the dose that is delivered. A novel deformable image registration algorithm based on tissue biomechanics was applied to previous liver cancer patients to track targets and surrounding organs during radiotherapy. Modeling these daily anatomic variations permitted dose accumulation, thereby improving calculations of the delivered doses. The accuracy of the algorithm to track dose was validated using imaging from a deformable, 3-dimensional dosimeter able to optically track absorbed dose. Reconstructing the delivered dose revealed that 70% of patients had substantial deviations from the initial planned dose. An alternative image-guidance technique using respiratory-correlated imaging was simulated, which reduced both the residual tumor targeting errors and the magnitude of the delivered dose deviations. A planning and delivery strategy for liver radiotherapy was then developed that minimizes the impact of breathing motion, and applied a margin to account for the impact of liver deformation during treatment. This margin is 38% smaller on average than the margin used clinically, and permitted an average dose-escalation to liver tumors of 9% for the same risk of toxicity. Simulating the delivered dose with deformable dose reconstruction demonstrated that the plans with smaller margins were robust, as 90% of patients' tumors received the intended dose. This strategy can be readily implemented with widely available image guidance and treatment planning tools.

  18. Quantification of Proton Dose Calculation Accuracy in the Lung

    SciTech Connect

    Grassberger, Clemens; Daartz, Juliane; Dowdell, Stephen; Ruggieri, Thomas; Sharp, Greg; Paganetti, Harald

    2014-06-01

    Purpose: To quantify the accuracy of a clinical proton treatment planning system (TPS) as well as Monte Carlo (MC)–based dose calculation through measurements and to assess the clinical impact in a cohort of patients with tumors located in the lung. Methods and Materials: A lung phantom and ion chamber array were used to measure the dose to a plane through a tumor embedded in the lung, and to determine the distal fall-off of the proton beam. Results were compared with TPS and MC calculations. Dose distributions in 19 patients (54 fields total) were simulated using MC and compared to the TPS algorithm. Results: MC increased dose calculation accuracy in lung tissue compared with the TPS and reproduced dose measurements in the target to within ±2%. The average difference between measured and predicted dose in a plane through the center of the target was 5.6% for the TPS and 1.6% for MC. MC recalculations in patients showed a mean dose to the clinical target volume on average 3.4% lower than the TPS, exceeding 5% for small fields. For large tumors, MC also predicted consistently higher V5 and V10 to the normal lung, because of a wider lateral penumbra, which was also observed experimentally. Critical structures located distal to the target could show large deviations, although this effect was highly patient specific. Range measurements showed that MC can reduce range uncertainty by a factor of ∼2: the average (maximum) difference to the measured range was 3.9 mm (7.5 mm) for MC and 7 mm (17 mm) for the TPS in lung tissue. Conclusion: Integration of Monte Carlo dose calculation techniques into the clinic would improve treatment quality in proton therapy for lung cancer by avoiding systematic overestimation of target dose and underestimation of dose to normal lung. In addition, the ability to confidently reduce range margins would benefit all patients by potentially lowering toxicity.

  19. Comparison of pencil-beam, collapsed-cone and Monte-Carlo algorithms in radiotherapy treatment planning for 6-MV photons

    NASA Astrophysics Data System (ADS)

    Kim, Sung Jin; Kim, Sung Kyu; Kim, Dong Ho

    2015-07-01

    Treatment planning system calculations in inhomogeneous regions may present significant inaccuracies due to loss of electronic equilibrium. In this study, three different dose calculation algorithms, pencil beam (PB), collapsed cone (CC), and Monte-Carlo (MC), provided by our planning system were compared to assess their impact on the three-dimensional planning of lung and breast cases. A total of five breast and five lung cases were calculated by using the PB, CC, and MC algorithms. Planning target volume (PTV) and organs at risk (OARs) delineations were performed according to our institution's protocols on the Oncentra MasterPlan image registration module, on 0.3-0.5 cm computed tomography (CT) slices taken under normal respiration conditions. Intensity-modulated radiation therapy (IMRT) plans were calculated with the three algorithms for each patient. The plans were calculated on the Oncentra MasterPlan (PB and CC) and CMS Monaco (MC) treatment planning systems for 6 MV. The plans were compared in terms of the dose distribution in the target, the OAR volumes, and the monitor units (MUs). Furthermore, absolute dosimetry was measured using a three-dimensional diode array detector (ArcCHECK) to evaluate the dose differences in a homogeneous phantom. Comparing the dose distributions planned by using the PB, CC, and MC algorithms, the PB algorithm provided adequate coverage of the PTV. The MUs calculated using the PB algorithm were less than those calculated using the CC and MC algorithms. The MC algorithm showed the highest accuracy in terms of the absolute dosimetry. Differences were found when comparing the calculation algorithms: the PB algorithm estimated higher doses for the target than the CC and the MC algorithms, i.e., it overestimated the dose relative to the CC and MC calculations. The MC algorithm showed better accuracy than the other algorithms.

  20. An automated fitting procedure and software for dose-response curves with multiphasic features

    PubMed Central

    Veroli, Giovanni Y. Di; Fornari, Chiara; Goldlust, Ian; Mills, Graham; Koh, Siang Boon; Bramhall, Jo L; Richards, Frances M.; Jodrell, Duncan I.

    2015-01-01

    In cancer pharmacology (and many other areas), most dose-response curves are satisfactorily described by a classical Hill equation (i.e., a 4-parameter logistic model). Nevertheless, there are instances where the marked presence of more than one point of inflection, or the presence of combined agonist and antagonist effects, prevents straightforward modelling of the data via a standard Hill equation. Here we propose a modified model and automated fitting procedure to describe dose-response curves with multiphasic features. The resulting general model enables interpreting each phase of the dose-response as an independent dose-dependent process. We developed an algorithm which automatically generates and ranks dose-response models with varying degrees of multiphasic features. The algorithm was implemented in the new freely available Dr Fit software (sourceforge.net/projects/drfit/). We show how our approach is successful in describing dose-response curves with multiphasic features. Additionally, we analysed a large cancer cell viability screen involving 11650 dose-response curves. Based on our algorithm, we found that 28% of cases were better described by a multiphasic model than by the Hill model. We thus provide a robust approach to fit dose-response curves with various degrees of complexity, which, together with the provided software implementation, should enable a wide audience to easily process their own data. PMID:26424192
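
    For reference, the classical single-phase case mentioned above can be fitted in a few lines; the Dr Fit multiphasic extension and model ranking are not reproduced here, and the data points below are synthetic.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def hill(dose, bottom, top, ec50, slope):
        """Classical 4-parameter Hill (logistic) dose-response."""
        return bottom + (top - bottom) / (1.0 + (dose / ec50) ** slope)

    dose = np.array([1e-3, 1e-2, 1e-1, 1.0, 10.0, 100.0])        # concentrations (e.g., uM)
    viability = np.array([1.00, 0.98, 0.90, 0.55, 0.15, 0.05])   # synthetic responses

    params, _ = curve_fit(hill, dose, viability, p0=[0.0, 1.0, 1.0, 1.0],
                          bounds=([-0.2, 0.5, 1e-4, 0.1], [0.2, 1.2, 1e3, 5.0]))
    print(dict(zip(["bottom", "top", "EC50", "slope"], params.round(3))))
    ```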

  1. STAR Algorithm Integration Team - Facilitating operational algorithm development

    NASA Astrophysics Data System (ADS)

    Mikles, V. J.

    2015-12-01

    The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support of the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.

  2. Radiochromic film based transit dosimetry for verification of dose delivery with intensity modulated radiotherapy

    SciTech Connect

    Chung, Kwangzoo; Lee, Kiho; Shin, Dongho; Kyung Lim, Young; Byeong Lee, Se; Yoon, Myonggeun; Son, Jaeman; Yong Park, Sung

    2013-02-15

    Purpose: To evaluate transit-dose-based patient-specific quality assurance (QA) of intensity-modulated radiation therapy (IMRT) for verification of the accuracy of the dose delivered to the patient. Methods: Five IMRT plans were selected and utilized to irradiate a homogeneous plastic water phantom and an inhomogeneous anthropomorphic phantom. The transit dose distribution was measured with radiochromic film and was compared with the computed dose map on the same plane using a gamma index with a 3% dose-difference and 3 mm distance-to-agreement tolerance limit. Results: While the average gamma index for comparisons of dose distributions was less than one for 98.9% of all pixels from the transit dose with the homogeneous phantom, the passing rate was reduced to 95.0% for the transit dose with the inhomogeneous phantom. Transit doses due to a 5 mm setup error may cause up to a 50% failure rate of the gamma index. Conclusions: Transit-dose-based IMRT QA may be superior to the traditional QA method since the former can show whether the inhomogeneity correction algorithm from the TPS is accurate. In addition, transit-dose-based IMRT QA can be used to verify the accuracy of the dose delivered to the patient during treatment by revealing significant increases in the failure rate of the gamma index resulting from errors in patient positioning during treatment.
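
    The comparison above relies on a gamma index combining a dose-difference and a distance-to-agreement criterion. The following is a simplified, brute-force global 3%/3 mm sketch for two dose maps on the same grid; clinical tools interpolate more finely and apply low-dose thresholds, and the stand-in dose maps here are random.

    ```python
    import numpy as np

    def gamma_pass_rate(reference, evaluated, spacing_mm=1.0,
                        dose_tol=0.03, dta_mm=3.0, search_mm=6.0):
        norm = reference.max()                       # global dose normalization
        r = int(search_mm / spacing_mm)
        ny, nx = reference.shape
        gammas = np.empty_like(reference)
        for j in range(ny):
            for i in range(nx):
                j0, j1 = max(0, j - r), min(ny, j + r + 1)
                i0, i1 = max(0, i - r), min(nx, i + r + 1)
                jj, ii = np.mgrid[j0:j1, i0:i1]
                dist2 = ((jj - j) ** 2 + (ii - i) ** 2) * spacing_mm ** 2
                ddose2 = ((evaluated[j0:j1, i0:i1] - reference[j, i]) / (dose_tol * norm)) ** 2
                gammas[j, i] = np.sqrt(np.min(dist2 / dta_mm ** 2 + ddose2))
        return np.mean(gammas <= 1.0)

    ref = np.random.default_rng(4).random((60, 60)) * 2.0            # stand-in dose map (Gy)
    ev = ref + np.random.default_rng(5).normal(0, 0.02, ref.shape)   # perturbed copy
    print(f"gamma pass rate: {gamma_pass_rate(ref, ev):.1%}")
    ```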

  3. A Bayesian Dose-finding Design for Oncology Clinical Trials of Combinational Biological Agents

    PubMed Central

    Cai, Chunyan; Yuan, Ying; Ji, Yuan

    2013-01-01

    Treating patients with novel biological agents is becoming a leading trend in oncology. Unlike cytotoxic agents, for which efficacy and toxicity monotonically increase with dose, biological agents may exhibit non-monotonic patterns in their dose-response relationships. Using a trial with two biological agents as an example, we propose a dose-finding design to identify the biologically optimal dose combination (BODC), which is defined as the dose combination of the two agents with the highest efficacy and tolerable toxicity. A change-point model is used to reflect the fact that the dose-toxicity surface of the combinational agents may plateau at higher dose levels, and a flexible logistic model is proposed to accommodate the possible non-monotonic pattern for the dose-efficacy relationship. During the trial, we continuously update the posterior estimates of toxicity and efficacy and assign patients to the most appropriate dose combination. We propose a novel dose-finding algorithm to encourage sufficient exploration of untried dose combinations in the two-dimensional space. Extensive simulation studies show that the proposed design has desirable operating characteristics in identifying the BODC under various patterns of dose-toxicity and dose-efficacy relationships. PMID:24511160

  4. A Bayesian Dose-finding Design for Oncology Clinical Trials of Combinational Biological Agents.

    PubMed

    Cai, Chunyan; Yuan, Ying; Ji, Yuan

    2014-01-01

    Treating patients with novel biological agents is becoming a leading trend in oncology. Unlike cytotoxic agents, for which efficacy and toxicity monotonically increase with dose, biological agents may exhibit non-monotonic patterns in their dose-response relationships. Using a trial with two biological agents as an example, we propose a dose-finding design to identify the biologically optimal dose combination (BODC), which is defined as the dose combination of the two agents with the highest efficacy and tolerable toxicity. A change-point model is used to reflect the fact that the dose-toxicity surface of the combinational agents may plateau at higher dose levels, and a flexible logistic model is proposed to accommodate the possible non-monotonic pattern for the dose-efficacy relationship. During the trial, we continuously update the posterior estimates of toxicity and efficacy and assign patients to the most appropriate dose combination. We propose a novel dose-finding algorithm to encourage sufficient exploration of untried dose combinations in the two-dimensional space. Extensive simulation studies show that the proposed design has desirable operating characteristics in identifying the BODC under various patterns of dose-toxicity and dose-efficacy relationships. PMID:24511160

  5. Efficient Kriging Algorithms

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess

    2011-01-01

    More efficient versions of an interpolation method, called kriging, have been introduced in order to reduce its traditionally high computational cost. Written in C++, these approaches were tested on both synthetic and real data. Kriging is a best linear unbiased estimator and is suitable for interpolation of scattered data points. Kriging has long been used in the geostatistics and mining communities, but is now being researched for use in the image fusion of remotely sensed data. This allows a combination of data from various locations to be used to fill in any missing data from any single location. To arrive at the faster algorithms, a sparse SYMMLQ iterative solver, covariance tapering, Fast Multipole Methods (FMM), and nearest-neighbor searching techniques were used. These implementations were used when the coefficient matrix in the linear system is symmetric, but not necessarily positive-definite.
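
    As a reminder of what the underlying interpolator does, an ordinary-kriging estimate at a single query point can be written as a small dense linear solve. The Gaussian covariance model and its parameters below are assumptions, and the paper's accelerations (SYMMLQ, tapering, FMM, nearest-neighbor search) are deliberately omitted.

      import numpy as np

      def ordinary_kriging(xy, z, query, sill=1.0, length=1.0, nugget=1e-10):
          """Ordinary kriging at one query point with a Gaussian covariance model."""
          n = len(z)
          d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
          cov = sill * np.exp(-(d / length) ** 2)
          # augmented system enforcing the unbiasedness constraint (weights sum to 1)
          A = np.zeros((n + 1, n + 1))
          A[:n, :n] = cov + nugget * np.eye(n)
          A[:n, n] = 1.0
          A[n, :n] = 1.0
          d0 = np.linalg.norm(xy - query, axis=-1)
          b = np.append(sill * np.exp(-(d0 / length) ** 2), 1.0)
          w = np.linalg.solve(A, b)
          return float(w[:n] @ z)

      pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
      vals = np.array([1.0, 2.0, 3.0, 4.0])
      print(ordinary_kriging(pts, vals, np.array([0.5, 0.5])))  # 2.5 by symmetry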

  6. Fighting Censorship with Algorithms

    NASA Astrophysics Data System (ADS)

    Mahdian, Mohammad

    In countries such as China or Iran where Internet censorship is prevalent, users usually rely on proxies or anonymizers to freely access the web. The obvious difficulty with this approach is that once the address of a proxy or an anonymizer is announced for use to the public, the authorities can easily filter all traffic to that address. This poses a challenge as to how proxy addresses can be announced to users without leaking too much information to the censorship authorities. In this paper, we formulate this question as an interesting algorithmic problem. We study this problem in a static and a dynamic model, and give almost tight bounds on the number of proxy servers required to give access to n people k of whom are adversaries. We will also discuss how trust networks can be used in this context.

  7. Trial encoding algorithms ensemble.

    PubMed

    Cheng, Lipin Bill; Yeh, Ren Jye

    2013-01-01

    This paper proposes trial algorithms for some basic components in cryptography and lossless bit compression. The symmetric encryption is accomplished by mixing randomization and scrambling, with hashing of the key playing an essential role. The digital signature is adapted from the Hill cipher, with the verification key matrices incorporating un-invertible parts to hide the signature matrix. The hash is a straight running summation (addition chain) of data bytes plus some randomization. One simplified version can serve as a burst-error-correcting code. The lossless bit compressor is Shannon-Fano coding, which is less optimal than the later Huffman and arithmetic coding but can be conveniently implemented without the use of a tree structure and improved with byte concatenation. PMID:27057475
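
    The Shannon-Fano component can be sketched with the classic recursive frequency split. This simple version uses recursion for clarity rather than the tree-free, concatenation-improved variant described in the paper.

      def shannon_fano(symbols):
          """symbols: list of (symbol, frequency); returns dict symbol -> bit string."""
          codes = {}

          def split(items, prefix=""):
              if len(items) == 1:
                  codes[items[0][0]] = prefix or "0"
                  return
              total, running, cut = sum(f for _, f in items), 0, 1
              # split where the running total first reaches half of the block total
              for i in range(1, len(items)):
                  running += items[i - 1][1]
                  if running >= total / 2:
                      cut = i
                      break
              split(items[:cut], prefix + "0")
              split(items[cut:], prefix + "1")

          split(sorted(symbols, key=lambda kv: kv[1], reverse=True))
          return codes

      print(shannon_fano([("a", 15), ("b", 7), ("c", 6), ("d", 6), ("e", 5)]))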

  8. Dose differences in intensity-modulated radiotherapy plans calculated with pencil beam and Monte Carlo for lung SBRT.

    PubMed

    Liu, Han; Zhuang, Tingliang; Stephans, Kevin; Videtic, Gregory; Raithel, Stephen; Djemil, Toufik; Xia, Ping

    2015-01-01

    For patients with medically inoperable early-stage non-small cell lung cancer (NSCLC) treated with stereotactic body radiation therapy, early treatment plans were based on a simpler dose calculation algorithm, the pencil beam (PB) calculation. Because these patients had the longest treatment follow-up, identifying dose differences between the PB calculated dose and Monte Carlo calculated dose is clinically important for understanding treatment outcomes. Previous studies found significant dose differences between the PB dose calculation and more accurate dose calculation algorithms, such as convolution-based or Monte Carlo (MC), mostly for three-dimensional conformal radiotherapy (3D CRT) plans. The aim of this study is to investigate whether these observed dose differences also exist for intensity-modulated radiotherapy (IMRT) plans for both centrally and peripherally located tumors. Seventy patients (35 central and 35 peripheral) were retrospectively selected for this study. The clinical IMRT plans that were initially calculated with the PB algorithm were recalculated with the MC algorithm. Among these paired plans, dosimetric parameters were compared for the targets and critical organs. When compared to the MC calculation, the PB calculation overestimated doses to the planning target volumes (PTVs) of central and peripheral tumors by different magnitudes. The doses to 95% of the central and peripheral PTVs were overestimated by 9.7% ± 5.6% and 12.0% ± 7.3%, respectively. This dose overestimation did not affect doses to the critical organs, such as the spinal cord and lung. In conclusion, for NSCLC treated with IMRT, dose differences between the PB and MC calculations differed from those observed for 3D CRT. No significant dose differences in critical organs were observed between the two calculations. PMID:26699560
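
    The headline comparison (dose to 95% of the PTV) reduces to a percentile of the dose values inside a structure mask. A minimal sketch, with made-up dose grids and an assumed 10% offset, is shown below.

      import numpy as np

      def d95(dose, mask):
          """Dose received by at least 95% of the structure: the 5th percentile inside the mask."""
          return float(np.percentile(dose[mask], 5))

      def percent_overestimate(dose_pb, dose_mc, ptv_mask):
          """PB-vs-MC difference in D95, as a percentage of the MC value."""
          a, b = d95(dose_pb, ptv_mask), d95(dose_mc, ptv_mask)
          return 100.0 * (a - b) / b

      # toy grids: PB systematically 10% hotter than MC inside the PTV
      mc = np.random.default_rng(0).uniform(45, 55, size=(20, 20, 20))
      pb = mc * 1.10
      mask = np.zeros_like(mc, dtype=bool); mask[5:15, 5:15, 5:15] = True
      print(percent_overestimate(pb, mc, mask))   # ~10%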

  9. Multisensor data fusion algorithm development

    SciTech Connect

    Yocky, D.A.; Chadwick, M.D.; Goudy, S.P.; Johnson, D.K.

    1995-12-01

    This report presents a two-year LDRD research effort into multisensor data fusion. We approached the problem by addressing the available types of data, preprocessing that data, and developing fusion algorithms using that data. The report reflects these three distinct areas. First, the possible data sets for fusion are identified. Second, automated registration techniques for imagery data are analyzed. Third, two fusion techniques are presented. The first fusion algorithm is based on the two-dimensional discrete wavelet transform. Using test images, the wavelet algorithm is compared against intensity modulation and intensity-hue-saturation image fusion algorithms that are available in commercial software. The wavelet approach outperforms the other two fusion techniques by preserving spectral/spatial information more precisely. The wavelet fusion algorithm was also applied to Landsat Thematic Mapper and SPOT panchromatic imagery data. The second algorithm is based on a linear-regression technique. We analyzed the technique using the same Landsat and SPOT data.
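
    A common wavelet fusion rule, averaging the approximation band and keeping the larger-magnitude detail coefficients, can be sketched with a single-level Haar transform. This is a generic illustration, not the report's implementation, and it assumes co-registered, equally sized single-band images with even dimensions.

      import numpy as np

      def haar2(img):
          """One-level 2D Haar transform: approximation plus three detail bands."""
          a = (img[0::2, 0::2] + img[0::2, 1::2] + img[1::2, 0::2] + img[1::2, 1::2]) / 4
          h = (img[0::2, 0::2] - img[0::2, 1::2] + img[1::2, 0::2] - img[1::2, 1::2]) / 4
          v = (img[0::2, 0::2] + img[0::2, 1::2] - img[1::2, 0::2] - img[1::2, 1::2]) / 4
          d = (img[0::2, 0::2] - img[0::2, 1::2] - img[1::2, 0::2] + img[1::2, 1::2]) / 4
          return a, h, v, d

      def ihaar2(a, h, v, d):
          out = np.empty((a.shape[0] * 2, a.shape[1] * 2))
          out[0::2, 0::2] = a + h + v + d
          out[0::2, 1::2] = a - h + v - d
          out[1::2, 0::2] = a + h - v - d
          out[1::2, 1::2] = a - h - v + d
          return out

      def fuse(img1, img2):
          """Average the approximations; keep the larger-magnitude detail coefficient."""
          b1, b2 = haar2(img1), haar2(img2)
          fused = [(b1[0] + b2[0]) / 2]
          for c1, c2 in zip(b1[1:], b2[1:]):
              fused.append(np.where(np.abs(c1) >= np.abs(c2), c1, c2))
          return ihaar2(*fused)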

  10. Measurement of vocal doses in speech: experimental procedure and signal processing.

    PubMed

    Svec, Jan G; Popolo, Peter S; Titze, Ingo R

    2003-01-01

    An experimental method for quantifying the amount of voicing over time is described in a tutorial manner. A new procedure for obtaining calibrated sound pressure levels (SPL) of speech from a head-mounted microphone is offered. An algorithm for voicing detection (kv) and fundamental frequency (F0) extraction from an electroglottographic signal is described. The extracted values of SPL, F0, and kv are used to derive five vocal doses: the time dose (total voicing time), the cycle dose (total number of vocal fold oscillatory cycles), the distance dose (total distance travelled by the vocal folds in an oscillatory path), the energy dissipation dose (total amount of heat energy dissipated in the vocal folds) and the radiated energy dose (total acoustic energy radiated from the mouth). The doses measure the vocal load and can be used for studying the effects of vocal fold tissue exposure to vibration. PMID:14686546
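
    The first two doses have direct definitions: the time dose is the accumulated voiced time, and the cycle dose is the accumulated number of vocal-fold oscillations. A minimal sketch is given below; the frame interval is an assumed value, and the distance and energy doses are omitted because they additionally require a vibration-amplitude model.

      import numpy as np

      def vocal_doses(kv, f0_hz, frame_dt_s=0.03):
          """kv: 0/1 voicing flags per frame; f0_hz: fundamental frequency per frame.

          Returns (time dose in seconds, cycle dose in vocal-fold cycles).
          """
          kv = np.asarray(kv, dtype=float)
          f0 = np.asarray(f0_hz, dtype=float)
          dt = kv * frame_dt_s                 # voiced time per frame
          time_dose = dt.sum()
          cycle_dose = (f0 * dt).sum()         # cycles = frequency x voiced time
          return time_dose, cycle_dose

      kv = [1, 1, 0, 1]
      f0 = [200.0, 210.0, 0.0, 190.0]
      print(vocal_doses(kv, f0))   # (0.09 s, 18.0 cycles)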

  11. Percentage depth dose evaluation in heterogeneous media using thermoluminescent dosimetry.

    PubMed

    da Rosa, L A R; Cardoso, S C; Campos, L T; Alves, V G L; Batista, D V S; Facure, A

    2010-01-01

    The purpose of this study is to investigate the influence of lung heterogeneity inside a soft tissue phantom on percentage depth dose (PDD). PDD curves were obtained experimentally using LiF:Mg,Ti (TLD-100) thermoluminescent detectors and applying the Eclipse treatment planning system algorithms Batho, modified Batho (M-Batho or BMod), equivalent TAR (E-TAR or EQTAR), and the anisotropic analytical algorithm (AAA) for a 15 MV photon beam and field sizes of 1 x 1, 2 x 2, 5 x 5, and 10 x 10 cm2. Monte Carlo simulations were performed using the DOSRZnrc user code of EGSnrc. The experimental results agree with Monte Carlo simulations for all irradiation field sizes. Comparisons with Monte Carlo calculations show that the AAA algorithm provides the best simulations of PDD curves for all field sizes investigated. However, even this algorithm cannot accurately predict PDD values in the lung for field sizes of 1 x 1 and 2 x 2 cm2. An overdosage in the lung of about 40% and 20% is calculated by the AAA algorithm close to the soft tissue/lung interface for 1 x 1 and 2 x 2 cm2 field sizes, respectively. It was demonstrated that differences of 100% between Monte Carlo results and the algorithms Batho, modified Batho, and equivalent TAR responses may exist inside the lung region for the 1 x 1 cm2 field. PMID:20160687

  12. Characterisation of mega-voltage electron pencil beam dose distributions: viability of a measurement-based approach.

    PubMed

    Barnes, M P; Ebert, M A

    2008-03-01

    The concept of electron pencil-beam dose distributions is central to pencil-beam algorithms used in electron beam radiotherapy treatment planning. The Hogstrom algorithm, which is a common algorithm for electron treatment planning, models large electron field dose distributions by the superposition of a series of pencil beam dose distributions. This means that the accurate characterisation of an electron pencil beam is essential for the accuracy of the dose algorithm. The aim of this study was to evaluate a measurement-based approach for obtaining electron pencil-beam dose distributions. The primary incentive for the study was the accurate calculation of dose distributions for narrow fields, as traditional electron algorithms are generally inaccurate for such geometries. Kodak X-Omat radiographic film was used in a solid water phantom to measure the dose distribution of circular 12 MeV beams from a Varian 21EX linear accelerator. Measurements were made for beams of diameter 1.5, 2, 4, 8, 16 and 32 mm. A blocked-field technique was used to subtract photon contamination in the beam. The "error function" derived from Fermi-Eyges Multiple Coulomb Scattering (MCS) theory for corresponding square fields was used to fit the resulting dose distributions so that extrapolation down to a pencil beam distribution could be made. The Monte Carlo codes BEAM and EGSnrc were used to simulate the experimental arrangement. The 8 mm beam dose distribution was also measured with TLD-100 microcubes. Agreement between film, TLD and Monte Carlo simulation results was found to be consistent with the spatial resolution used. The study has shown that it is possible to extrapolate narrow electron beam dose distributions down to a pencil beam dose distribution using the error function. However, due to experimental uncertainties and measurement difficulties, Monte Carlo is recommended as the method of choice for characterising electron pencil-beam dose distributions. PMID:18488959
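
    The "error function" fit referred to above models the cross-profile of a square field of side a with Gaussian pencil-beam spread sigma as a sum of two error functions, so sigma can be fitted at each depth and the profile extrapolated toward a pencil beam. A hedged sketch follows; the field size, noise level and fit starting values are assumptions.

      import numpy as np
      from scipy.optimize import curve_fit
      from scipy.special import erf

      def square_field_profile(x, sigma, amplitude, a=8.0):
          """Cross-profile of a square field of side a (mm) with Gaussian pencil spread sigma."""
          return amplitude * 0.5 * (erf((a / 2 - x) / (np.sqrt(2) * sigma))
                                    + erf((a / 2 + x) / (np.sqrt(2) * sigma)))

      # fit sigma to a (here simulated) measured profile at one depth
      x_mm = np.linspace(-20, 20, 81)
      measured = square_field_profile(x_mm, sigma=3.2, amplitude=1.0) \
                 + np.random.default_rng(1).normal(0, 0.01, x_mm.size)
      (sigma_fit, amp_fit), _ = curve_fit(square_field_profile, x_mm, measured, p0=[2.0, 1.0])
      print(sigma_fit)   # recovers roughly 3.2 mm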

  13. Dose Calibration of the ISS-RAD Fast Neutron Detector

    NASA Technical Reports Server (NTRS)

    Zeitlin, C.

    2015-01-01

    The ISS-RAD instrument has been fabricated by Southwest Research Institute and delivered to NASA for flight to the ISS in late 2015 or early 2016. ISS-RAD is essentially two instruments that share a common interface to ISS. The two instruments are the Charged Particle Detector (CPD), which is very similar to the MSL-RAD detector on Mars, and the Fast Neutron Detector (FND), which is a boron-loaded plastic scintillator with readout optimized for the 0.5 to 10 MeV energy range. As the FND is completely new, it has been necessary to develop methodology to allow it to be used to measure the neutron dose and dose equivalent. This talk will focus on the methods developed and their implementation using calibration data obtained in quasi-monoenergetic (QMN) neutron fields at the PTB facility in Braunschweig, Germany. The QMN data allow us to determine an approximate response function, from which we estimate dose and dose equivalent contributions per detected neutron as a function of the pulse height. We refer to these as the "pSv per count" curves for dose equivalent and the "pGy per count" curves for dose. The FND is required to provide a dose equivalent measurement with an accuracy of ±10% of the known value in a calibrated AmBe field. Four variants of the analysis method were developed, corresponding to two different approximations of the pSv per count curve, and two different implementations, one for real-time analysis onboard ISS and one for ground analysis. We will show that the preferred method, when applied in either real-time or ground analysis, yields good accuracy for the AmBe field. We find that the real-time algorithm is more susceptible to chance-coincidence background than is the algorithm used in ground analysis, so that the best estimates will come from the latter.
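
    The dose-equivalent estimate described here is, in essence, a dot product between the measured pulse-height histogram and the calibrated "pSv per count" curve. A minimal sketch follows; the bin edges and curve values are placeholders, not the PTB-derived calibration.

      import numpy as np

      def dose_equivalent_psv(pulse_heights, bin_edges, psv_per_count):
          """Sum the per-count dose-equivalent contribution over all detected neutrons.

          psv_per_count[i] is the calibrated contribution (pSv) for a count falling in
          pulse-height bin i; the values below are placeholders only.
          """
          counts, _ = np.histogram(pulse_heights, bins=bin_edges)
          return float(counts @ psv_per_count)

      edges = np.linspace(0.0, 10.0, 11)                     # 10 pulse-height bins (arbitrary units)
      psv_curve = np.linspace(5.0, 40.0, 10)                 # placeholder pSv-per-count values
      events = np.random.default_rng(2).uniform(0, 10, 500)  # simulated pulse heights
      print(dose_equivalent_psv(events, edges, psv_curve), "pSv")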

  14. Ozone Uncertainties Study Algorithm (OUSA)

    NASA Technical Reports Server (NTRS)

    Bahethi, O. P.

    1982-01-01

    An algorithm to carry out sensitivity, uncertainty and overall imprecision studies for a set of input parameters to a one-dimensional steady ozone photochemistry model is described. This algorithm can be used to evaluate steady state perturbations due to point source or distributed ejection of H2O, ClX, and NOx, besides varying the incident solar flux. This algorithm is operational on the IBM OS/360-91 computer at NASA/Goddard Space Flight Center's Science and Applications Computer Center (SACC).

  15. Ozone Uncertainties Study Algorithm (OUSA)

    NASA Astrophysics Data System (ADS)

    Bahethi, O. P.

    An algorithm to carry out sensitivity, uncertainty and overall imprecision studies for a set of input parameters to a one-dimensional steady ozone photochemistry model is described. This algorithm can be used to evaluate steady state perturbations due to point source or distributed ejection of H2O, ClX, and NOx, besides varying the incident solar flux. This algorithm is operational on the IBM OS/360-91 computer at NASA/Goddard Space Flight Center's Science and Applications Computer Center (SACC).

  16. A phantom study on the behavior of Acuros XB algorithm in flattening filter free photon beams

    PubMed Central

    Muralidhar, K. R.; Pangam, Suresh; Srinivas, P.; Athar Ali, Mirza; Priya, V. Sujana; Komanduri, Krishna

    2015-01-01

    To study the behavior of the Acuros XB algorithm for flattening filter free (FFF) photon beams in comparison with the anisotropic analytical algorithm (AAA) when applied to homogeneous and heterogeneous phantoms in conventional and RapidArc techniques. The Acuros XB (Eclipse version 10.0, Varian Medical Systems, CA, USA) and AAA algorithms were used to calculate dose distributions for both 6X FFF and 10X FFF energies. RapidArc plans were created on the Catphan 504 phantom, and conventional plans on a 30 × 30 × 30 cm3 virtual homogeneous water phantom, a virtual heterogeneous phantom with various inserts, and a solid water phantom with an air cavity. Doses at the various inserts with different densities were measured with both the AAA and Acuros algorithms. The maximum % variation in dose was observed in the (−944 HU) air insert and the minimum in the (85 HU) acrylic insert for both 6X FFF and 10X FFF photons. Less than 1% variation was observed between −149 HU and 282 HU for both energies. At −40 HU and 765 HU, Acuros behaved quite differently with 10X FFF. The maximum % variation in dose was observed at lower HU values and the minimum variation at higher HU values for both FFF energies. The global maximum dose was observed at greater depths for Acuros than for AAA for both energies. An increase in dose was observed with the Acuros algorithm at almost all densities, and a decrease at a few densities ranging from 282 to 643 HU. Field size, depth, beam energy, and material density influenced the dose difference between the two algorithms. PMID:26500400

  17. Dual integral glow analysis-evaluation of the method in determination of shallow dose and deep dose in selected beta radiation fields

    SciTech Connect

    Wagner, E.C.; Samei, E.; Kearfott, K.J.

    1996-06-01

    Since introduction of the shallow dose and deep dose by the International Commission on Radiation Units and Measurements (ICRU) in 1985, many efforts have been made to measure these quantities. In an earlier study, we introduced a new method, termed Dual Integral Glow Analysis (DINGA), for evaluation of these quantities. The method is based on obtaining the integrals of the glow curves from opposite sides of a hot-gas-heated single thermoluminescent dosimeter (TLD) or TLD pair. In this study, we demonstrate the feasibility of DINGA in determination of the shallow dose and deep dose in selected beta radiation fields using a computational algorithm. The depth-dose distributions in the TLDs are computed for 19 well-defined beta radiation fields using the EGS4 Monte Carlo radiation transport code. Using a mathematical description of the depth-dose distribution in the TLD and given the thermophysical and optical parameters of the system, the Randall-Wilkins TL model is used in a computational routine to reconstruct the depth-dose distribution from which the deep dose and shallow dose are evaluated. Introducing a 5% fluctuation in the response of the TL elements, the error in the computed deep dose and shallow dose is within ±20% for the examined beta radiation fields.
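
    The Randall-Wilkins first-order kinetics model used in the reconstruction gives the glow-curve intensity as I(T) = n0 s exp(-E/kT) exp(-(s/beta) * integral from T0 to T of exp(-E/kT') dT') for a linear heating rate beta. A numerical sketch with assumed trap parameters is shown below.

      import numpy as np

      K_BOLTZMANN = 8.617e-5   # eV/K

      def randall_wilkins(T, E=1.0, s=1e12, beta=5.0, n0=1.0):
          """First-order TL glow curve I(T) for linear heating at rate beta (K/s)."""
          arrhenius = np.exp(-E / (K_BOLTZMANN * T))
          # cumulative (trapezoidal) integral of the escape probability from the start of heating
          integral = np.concatenate(
              ([0.0], np.cumsum(0.5 * (arrhenius[1:] + arrhenius[:-1]) * np.diff(T))))
          return n0 * s * arrhenius * np.exp(-(s / beta) * integral)

      T = np.linspace(300.0, 600.0, 3001)   # heating from 300 K to 600 K
      glow = randall_wilkins(T)
      print(T[np.argmax(glow)])             # peak temperature of the simulated glow curve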

  18. Solar Occultation Retrieval Algorithm Development

    NASA Technical Reports Server (NTRS)

    Lumpe, Jerry D.

    2004-01-01

    This effort addresses the comparison and validation of currently operational solar occultation retrieval algorithms, and the development of generalized algorithms for future application to multiple platforms. Work to date has included initial development of generalized forward model algorithms capable of simulating transmission data from the POAM II/III and SAGE II/III instruments. Work in the second quarter will focus on completion of the forward model algorithms, including accurate spectral characteristics for all instruments, and comparison of simulated transmission data with actual Level 1 instrument data for specific occultation events.

  19. Messy genetic algorithms: Recent developments

    SciTech Connect

    Kargupta, H.

    1996-09-01

    Messy genetic algorithms define a rare class of algorithms that realize the need for detecting appropriate relations among members of the search domain in optimization. This paper reviews earlier works in messy genetic algorithms and describes some recent developments. It also describes the gene expression messy GA (GEMGA)--an O(Λ^κ(ℓ^2 + κ)) sample complexity algorithm for the class of order-κ delineable problems (problems that can be solved by considering no higher than order-κ relations) of size ℓ and alphabet size Λ. Experimental results are presented to demonstrate the scalability of the GEMGA.

  20. NOSS Altimeter Detailed Algorithm specifications

    NASA Technical Reports Server (NTRS)

    Hancock, D. W.; Mcmillan, J. D.

    1982-01-01

    The details of the algorithms and data sets required for satellite radar altimeter data processing are documented in a form suitable for (1) development of the benchmark software and (2) coding the operational software. The algorithms reported in detail are those established for altimeter processing. The algorithms which required some additional development before documenting for production were only scoped. The algorithms are divided into two levels of processing. The first level converts the data to engineering units and applies corrections for instrument variations. The second level provides geophysical measurements derived from altimeter parameters for oceanographic users.

  1. Preconditioned quantum linear system algorithm.

    PubMed

    Clader, B D; Jacobs, B C; Sprouse, C R

    2013-06-21

    We describe a quantum algorithm that generalizes the quantum linear system algorithm [Harrow et al., Phys. Rev. Lett. 103, 150502 (2009)] to arbitrary problem specifications. We develop a state preparation routine that can initialize generic states, show how simple ancilla measurements can be used to calculate many quantities of interest, and integrate a quantum-compatible preconditioner that greatly expands the number of problems that can achieve exponential speedup over classical linear systems solvers. To demonstrate the algorithm's applicability, we show how it can be used to compute the electromagnetic scattering cross section of an arbitrary target exponentially faster than the best classical algorithm. PMID:23829722

  2. Variable Selection using MM Algorithms

    PubMed Central

    Hunter, David R.; Li, Runze

    2009-01-01

    Variable selection is fundamental to high-dimensional statistical modeling. Many variable selection techniques may be implemented by maximum penalized likelihood using various penalty functions. Optimizing the penalized likelihood function is often challenging because it may be nondifferentiable and/or nonconcave. This article proposes a new class of algorithms for finding a maximizer of the penalized likelihood for a broad class of penalty functions. These algorithms operate by perturbing the penalty function slightly to render it differentiable, then optimizing this differentiable function using a minorize-maximize (MM) algorithm. MM algorithms are useful extensions of the well-known class of EM algorithms, a fact that allows us to analyze the local and global convergence of the proposed algorithm using some of the techniques employed for EM algorithms. In particular, we prove that when our MM algorithms converge, they must converge to a desirable point; we also discuss conditions under which this convergence may be guaranteed. We exploit the Newton-Raphson-like aspect of these algorithms to propose a sandwich estimator for the standard errors of the estimators. Our method performs well in numerical tests. PMID:19458786
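
    The perturb-then-majorize idea can be sketched for penalized least squares with an L1 penalty: perturbing |b_j| to |b_j| + eps makes the quadratic majorizer well defined at zero, and each MM step then reduces to a ridge-like linear solve. This is an illustrative sketch of the approach, not the authors' code; the penalty choice, lambda and eps are assumed values.

      import numpy as np

      def mm_penalized_ls(X, y, lam=0.5, eps=1e-6, iters=200):
          """MM iterations for min ||y - X b||^2 / 2 + lam * sum |b_j|, with the penalty
          perturbed by eps so the quadratic majorizer is defined at b_j = 0."""
          b = np.linalg.lstsq(X, y, rcond=None)[0]      # start from ordinary least squares
          XtX, Xty = X.T @ X, X.T @ y
          for _ in range(iters):
              # majorize lam*|b_j| by a quadratic with curvature lam / (|b_j| + eps)
              W = np.diag(lam / (np.abs(b) + eps))
              b = np.linalg.solve(XtX + W, Xty)
          return b

      rng = np.random.default_rng(3)
      X = rng.normal(size=(100, 5))
      beta_true = np.array([2.0, 0.0, 0.0, -1.5, 0.0])
      y = X @ beta_true + rng.normal(scale=0.1, size=100)
      print(np.round(mm_penalized_ls(X, y), 3))   # large coefficients stay near 2 and -1.5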

  3. Improved calibration of mass stopping power in low density tissue for a proton pencil beam algorithm

    NASA Astrophysics Data System (ADS)

    Warren, Daniel R.; Partridge, Mike; Hill, Mark A.; Peach, Ken

    2015-06-01

    Dose distributions for proton therapy treatments are almost exclusively calculated using pencil beam algorithms. An essential input to these algorithms is the patient model, derived from x-ray computed tomography (CT), which is used to estimate proton stopping power along the pencil beam paths. This study highlights a potential inaccuracy in the mapping between mass density and proton stopping power used by a clinical pencil beam algorithm in materials less dense than water. It proposes an alternative physically-motivated function (the mass average, or MA, formula) for use in this region. Comparisons are made between dose-depth curves calculated by the pencil beam method and those calculated by the Monte Carlo particle transport code MCNPX in a one-dimensional lung model. Proton range differences of up to 3% are observed between the methods, reduced to  <1% when using the MA function. The impact of these range errors on clinical dose distributions is demonstrated using treatment plans for a non-small cell lung cancer patient. The change in stopping power calculation methodology results in relatively minor differences in dose when plans use three fields, but differences are observed at the 2%-2 mm level when a single field uniform dose technique is adopted. It is therefore suggested that the MA formula is adopted by users of the pencil beam algorithm for optimal dose calculation in lung, and that a similar approach is considered when beams traverse other low density regions such as the paranasal sinuses and mastoid process.

  4. Comparison of different dose reduction system in computed tomography for orthodontic applications

    PubMed Central

    FANUCCI, E.; FIASCHETTI, V.; OTTRIA, L.; MATALONI, M; ACAMPORA, V.; LIONE, R.; BARLATTANI, A.; SIMONETTI, G.

    2011-01-01

    SUMMARY To compare different CT systems, MSCT (multislice computed tomography) with different acquisition parameters (100 kV, 80 kV) and a different reconstruction algorithm (ASIR), and CBCT (cone beam computed tomography) examination, in terms of absorbed X-ray dose and diagnostic accuracy. The 80 kV protocols, compared with the 100 kV protocols, resulted in a reduced total radiation dose without relevant loss of diagnostic image information and quality. The CBCT protocols, compared with the 80 kV MSCT protocols, resulted in a reduced total radiation dose but some loss of diagnostic image information and quality, although not a substantial one. In addition, the new ASIR reconstruction system available on MSCT equipment allows the dose to be halved without compromising image quality. PMID:23285397

  5. Integral dose conservation in radiotherapy.

    PubMed

    Reese, Adam S; Das, Shiva K; Curie, Charles; Marks, Lawrence B

    2009-03-01

    Treatment planners frequently modify beam arrangements and use IMRT to improve target dose coverage while satisfying dose constraints on normal tissues. The authors herein analyze the limitations of these strategies and quantitatively assess the extent to which dose can be redistributed within the patient volume. Specifically, the authors hypothesize that (1) the normalized integral dose is constant across concentric shells of normal tissue surrounding the target (normalized to the average integral shell dose), (2) the normalized integral shell dose is constant across plans with different numbers and orientations of beams, and (3) the normalized integral shell dose is constant across plans when reducing the dose to a critical structure. Using the images of seven patients previously irradiated for brain or prostate cancer and one idealized scenario, competing three-dimensional conformal and IMRT plans were generated using different beam configurations. Within a given plan and for competing plans with a constant mean target dose, the normalized integral doses within concentric "shells" of surrounding normal tissue were quantitatively compared. Within each patient, the normalized integral dose to shells of normal tissue surrounding the target was relatively constant (1). Similarly, for each clinical scenario, the normalized integral dose for a given shell was also relatively constant regardless of the number and orientation of beams (2) or degree of sparing of a critical structure (3). 3D and IMRT planning tools can redistribute, rather than eliminate, dose to the surrounding normal tissues (intuitively known by planners). More specifically, dose cannot be moved between shells surrounding the target but only within a shell. This implies that there are limitations in the extent to which a critical structure can be spared based on the location and geometry of the critical structure relative to the target. PMID:19378734

  6. Algorithm for dosimetry of multiarc linear-accelerator stereotactic radiosurgery

    SciTech Connect

    Luxton, G.; Jozsef, G.; Astrahan, M.A.

    1991-11-01

    Treatment planning for multiarc radiosurgery is an inherently complex three-dimensional dosimetry problem. Characteristics of small-field x-ray beams suggest that major simplification of the dose computation algorithm is possible without significant loss of accuracy compared to calculations based on large-field algorithms. The simplification makes it practical to efficiently implement accurate multiplanar dosimetry calculations on a desktop computer. An algorithm is described that is based on data from fixed-beam tissue-maximum-ratio (TMR) and profile measurements at isocenter. The profile for each fixed beam is scaled geometrically according to distance from the x-ray source. Beam broadening due to scatter is taken into account by a simple formula that interpolates the full width at half maximum (FWHM) between profiles at isocenter at different depths in phantom. TMR and profile data for two representative small-field collimators (10- and 25-mm projected diameter) were obtained by TLD and film measurements in a phantom. The accuracy of the calculational method and the associated computer program were verified by TLD and film measurements of noncoplanar multiarc irradiations from these collimators on a 4-MV linear accelerator. Comparison of film measurements in two orthogonal planes showed close agreement with calculations in the shape of the dose distribution. Maximal separation of measured and calculated 90%, 80%, and 50% isodose curves was ≤0.5 mm for all planes and collimators. All TLD and film measurements of dose to isocenter agreed with calculations to within 2%.
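
    The described model can be sketched as a sum of fixed-beam contributions around an arc, each the product of a TMR at the point's depth, a geometrically scaled off-axis profile whose width grows slowly with depth, and an inverse-square factor. In the sketch below the TMR and profile functions are smooth placeholders standing in for measured beam data, and a flat-surface depth approximation is used.

      import numpy as np

      SAD, R_PHANTOM = 1000.0, 80.0          # mm; source-axis distance, phantom radius (assumed)

      def tmr(depth_mm):
          """Placeholder tissue-maximum ratio (real data would be tabulated measurements)."""
          return np.exp(-0.005 * np.clip(depth_mm, 0.0, None))

      def profile(r_iso_mm, depth_mm, fwhm0=10.0):
          """Placeholder off-axis profile; FWHM broadens slowly with depth."""
          sigma = (fwhm0 + 0.02 * depth_mm) / 2.355
          return np.exp(-0.5 * (r_iso_mm / sigma) ** 2)

      def arc_dose(point_mm, n_angles=180):
          """Sum fixed-beam contributions around a single coplanar arc (relative dose)."""
          p, dose = np.asarray(point_mm, dtype=float), 0.0
          for theta in np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False):
              src = SAD * np.array([np.cos(theta), np.sin(theta)])
              u = -src / SAD                              # unit vector from source to isocenter
              t = np.dot(p - src, u)                      # source-to-point distance along the axis
              depth = t - (SAD - R_PHANTOM)               # water depth (flat-surface approximation)
              r_iso = np.linalg.norm(p - src - t * u) * SAD / t   # off-axis distance scaled to isocenter
              dose += tmr(depth) * profile(r_iso, depth) * (SAD / t) ** 2
          return dose / n_angles

      print(arc_dose([0.0, 0.0]), arc_dose([10.0, 0.0]))   # isocenter vs a point 10 mm off-axis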

  7. The effects of anatomic resolution, respiratory variations and dose calculation methods on lung dosimetry

    NASA Astrophysics Data System (ADS)

    Babcock, Kerry Kent Ronald

    2009-04-01

    The goal of this thesis was to explore the effects of dose resolution, respiratory variation and dose calculation method on dose accuracy. To achieve this, two models of lung were created. The first model, called TISSUE, approximated the connective alveolar tissues of the lung. The second model, called BRANCH, approximated the lung's bronchial, arterial and venous branching networks. Both models were varied to represent the full inhalation, full exhalation and midbreath phases of the respiration cycle. To explore the effects of dose resolution and respiratory variation on dose accuracy, each model was converted into a CT dataset and imported into a Monte Carlo simulation. The resulting dose distributions were compared and contrasted against dose distributions from Monte Carlo simulations which included the explicit model geometries. It was concluded that, regardless of respiratory phase, the exclusion of the connective tissue structures in the CT representation did not significantly affect the accuracy of dose calculations. However, the exclusion of the BRANCH structures resulted in dose underestimations as high as 14% local to the branching structures. As lung density decreased, the overall dose accuracy marginally decreased. To explore the effects of dose calculation method on dose accuracy, CT representations of the lung models were imported into the Pinnacle 3 treatment planning system. Dose distributions were calculated using the collapsed cone convolution method and compared to those derived using the Monte Carlo method. For both lung models, it was concluded that the accuracy of the collapsed cone algorithm decreased with decreasing density. At full inhalation lung density, the collapsed cone algorithm underestimated dose by as much as 15%. Also, the accuracy of the CCC method decreased with decreasing field size. Further work is needed to determine the source of the discrepancy.

  8. Standardized radiological dose evaluations

    SciTech Connect

    Peterson, V.L.; Stahlnecker, E.

    1996-05-01

    Following the end of the Cold War, the mission of the Rocky Flats Environmental Technology Site changed from production of nuclear weapons to cleanup. Authorization basis documents for the facilities, primarily the Final Safety Analysis Reports, are being replaced with new ones in which accident scenarios are sorted into coarse bins of consequence and frequency, similar to the approach of DOE-STD-3011-94. Because this binning does not require high precision, a standardized approach for radiological dose evaluations is taken for all the facilities at the site. This is done through a standard calculation "template" for use by all safety analysts preparing the new documents. This report describes this template and its use.

  9. [Fixed-dose combination].

    PubMed

    Nagai, Yoshio

    2015-03-01

    Many patients with type 2 diabetes mellitus (T2DM) do not achieve satisfactory glycemic control by monotherapy alone, and often require multiple oral hypoglycemic agents (OHAs). Combining OHAs with complementary mechanisms of action is fundamental to the management of T2DM. Fixed-dose combination therapy (FDC) offers a method of simplifying complex regimens. Efficacy and tolerability appear to be similar between FDC and treatment with individual agents. In addition, FDC can enhance adherence and improved adherence may result in improved glycemic control. Four FDC agents are available in Japan: pioglitazone-glimepiride, pioglitazone-metformin, pioglitazone-alogliptin, and voglibose-mitiglinide. In this review, the advantages and disadvantages of these four combinations are identified and discussed. PMID:25812374

  10. Performance comparison between total variation (TV)-based compressed sensing and statistical iterative reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Tang, Jie; Nett, Brian E.; Chen, Guang-Hong

    2009-10-01

    Of all available reconstruction methods, statistical iterative reconstruction algorithms appear particularly promising since they enable accurate physical noise modeling. The newly developed compressive sampling/compressed sensing (CS) algorithm has shown the potential to accurately reconstruct images from highly undersampled data. The CS algorithm can be implemented in the statistical reconstruction framework as well. In this study, we compared the performance of two standard statistical reconstruction algorithms (penalized weighted least squares and q-GGMRF) to the CS algorithm. In assessing the image quality using these iterative reconstructions, it is critical to utilize realistic background anatomy as the reconstruction results are object dependent. A cadaver head was scanned on a Varian Trilogy system at different dose levels. Several figures of merit, including the relative root mean square error and a quality factor that accounts for noise performance and spatial resolution, were introduced to objectively evaluate reconstruction performance. A comparison is presented between the three algorithms at several dose levels for a constant undersampling factor. To facilitate this comparison, the original CS method was formulated in the framework of the statistical image reconstruction algorithms. Important conclusions of the measurements from our studies are that (1) for realistic neuro-anatomy, over 100 projections are required to avoid streak artifacts in the reconstructed images even with CS reconstruction, (2) regardless of the algorithm employed, it is beneficial to distribute the total dose to more views as long as each view remains quantum noise limited and (3) the total variation-based CS method is not appropriate for very low dose levels because while it can mitigate streaking artifacts, the images exhibit patchy behavior, which is potentially harmful for medical diagnosis.

  11. Comparison of RTPS and Monte Carlo dose distributions in heterogeneous phantoms for photon beams.

    PubMed

    Nakaguchi, Yuji; Araki, Fujio; Maruyama, Masato; Fukuda, Shogo

    2010-04-20

    The purpose of this study was to compare dose distributions from three different RTPS with those from Monte Carlo (MC) calculations and measurements, in heterogeneous phantoms for photon beams. This study used four RTPS algorithms: AAA (analytical anisotropic algorithm) implemented in the Eclipse (Varian Medical Systems) treatment planning system, CC (collapsed cone) superposition from Pinnacle (Philips), and MGS (multigrid superposition) and FFT (fast Fourier transform) convolution from XiO (CMS). The dose distributions from these algorithms were compared with those from MC and measurements in a set of heterogeneous phantoms. Eclipse/AAA underestimated the dose inside the lung region for the low energies of 4 and 6 MV. This is because Eclipse/AAA does not adequately account for the scaling of the pencil-beam spread (lateral electron transport) based on changes in electron density at low photon energies. The dose distributions from Pinnacle/CC and XiO/MGS almost agree with those of MC and measurements at low photon energies, but the errors increase at the higher energy of 15 MV, especially for a small field of 3x3 cm(2). The FFT convolution greatly overestimated the dose inside the lung slab compared to MC. The dose distributions from the superposition algorithms almost agree with those from MC as well as measured values at 4 and 6 MV. The dose errors for Eclipse/AAA are larger in the lung model phantoms for 4 and 6 MV. It is necessary to use algorithms comparable to superposition for accurate dose calculations in heterogeneous regions. PMID:20625219

  12. A convolution-superposition dose calculation engine for GPUs

    SciTech Connect

    Hissoiny, Sami; Ozell, Benoit; Despres, Philippe

    2010-03-15

    Purpose: Graphic processing units (GPUs) are increasingly used for scientific applications, where their parallel architecture and unprecedented computing power density can be exploited to accelerate calculations. In this paper, a new GPU implementation of a convolution/superposition (CS) algorithm is presented. Methods: This new GPU implementation has been designed from the ground-up to use the graphics card's strengths and to avoid its weaknesses. The CS GPU algorithm takes into account beam hardening, off-axis softening, kernel tilting, and relies heavily on raytracing through patient imaging data. Implementation details are reported as well as a multi-GPU solution. Results: An overall single-GPU acceleration factor of 908x was achieved when compared to a nonoptimized version of the CS algorithm implemented in PlanUNC in single threaded central processing unit (CPU) mode, resulting in approximately 2.8 s per beam for a 3D dose computation on a 0.4 cm grid. A comparison to an established commercial system leads to an acceleration factor of approximately 29x or 0.58 versus 16.6 s per beam in single threaded mode. An acceleration factor of 46x has been obtained for the total energy released per mass (TERMA) calculation and a 943x acceleration factor for the CS calculation compared to PlanUNC. Dose distributions also have been obtained for a simple water-lung phantom to verify that the implementation gives accurate results. Conclusions: These results suggest that GPUs are an attractive solution for radiation therapy applications and that careful design, taking the GPU architecture into account, is critical in obtaining significant acceleration factors. These results potentially can have a significant impact on complex dose delivery techniques requiring intensive dose calculations such as intensity-modulated radiation therapy (IMRT) and arc therapy. They also are relevant for adaptive radiation therapy where dose results must be obtained rapidly.

  13. Research on Routing Selection Algorithm Based on Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Gao, Guohong; Zhang, Baojian; Li, Xueyong; Lv, Jinna

    The genetic algorithm is a stochastic search and optimization method based on natural selection and the mechanisms of heredity in living organisms. In recent years, because of its potential for solving complicated problems and its successful application in industrial projects, the genetic algorithm has attracted wide attention from domestic and international scholars. Routing selection has been defined as a standard communication model of IP version 6. This paper proposes a service model for routing selection, and designs and implements a new routing selection algorithm based on a genetic algorithm. Experimental simulation results show that this algorithm can find better routes in less time and produces a more balanced network load, which improves the search ratio and the availability of network resources, and improves the quality of service.
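
    The building blocks of such a genetic algorithm (selection, crossover, mutation) are shown in the generic skeleton below. The paper's route encoding and its delay/load-balance fitness are not specified here, so a placeholder bit-count fitness is used; a routing application would decode each chromosome into a candidate path and score it accordingly.

      import random

      def genetic_algorithm(fitness, n_bits=20, pop_size=30, generations=60,
                            p_crossover=0.9, p_mutation=0.02):
          """Generic GA skeleton: tournament selection, one-point crossover, bit-flip mutation."""
          pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
          best = max(pop, key=fitness)
          for _ in range(generations):
              nxt = []
              while len(nxt) < pop_size:
                  # tournament selection of two parents
                  p1 = max(random.sample(pop, 3), key=fitness)
                  p2 = max(random.sample(pop, 3), key=fitness)
                  c1, c2 = p1[:], p2[:]
                  if random.random() < p_crossover:          # one-point crossover
                      cut = random.randint(1, n_bits - 1)
                      c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
                  for child in (c1, c2):                     # bit-flip mutation
                      for i in range(n_bits):
                          if random.random() < p_mutation:
                              child[i] ^= 1
                      nxt.append(child)
              pop = nxt[:pop_size]
              best = max(pop + [best], key=fitness)
          return best

      # Placeholder fitness (count of ones); a routing GA would instead score a decoded path.
      print(sum(genetic_algorithm(lambda bits: sum(bits))))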

  14. SU-F-19A-10: Recalculation and Reporting Clinical HDR 192-Ir Head and Neck Dose Distributions Using Model Based Dose Calculation

    SciTech Connect

    Carlsson Tedgren, A; Persson, M; Nilsson, J

    2014-06-15

    Purpose: To retrospectively re-calculate dose distributions for selected head and neck cancer patients, earlier treated with HDR 192Ir brachytherapy, using Monte Carlo (MC) simulations and compare results to distributions from the planning system derived using TG43 formalism. To study differences between dose to medium (as obtained with the MC code) and dose to water in medium as obtained through (1) ratios of stopping powers and (2) ratios of mass energy absorption coefficients between water and medium. Methods: The MC code Algebra was used to calculate dose distributions according to earlier actual treatment plans using anonymized plan data and CT images in DICOM format. Ratios of stopping power and mass energy absorption coefficients for water with various media obtained from 192-Ir spectra were used in toggling between dose to water and dose to media. Results: Differences between initial planned TG43 dose distributions and the doses to media calculated by MC are insignificant in the target volume. Differences are moderate (within 4–5 % at distances of 3–4 cm) but increase with distance and are most notable in bone and at the patient surface. Differences between dose to water and dose to medium are within 1-2% when using mass energy absorption coefficients to toggle between the two quantities but increase to above 10% for bone using stopping power ratios. Conclusion: MC predicts target doses for head and neck cancer patients in close agreement with TG43. MC yields improved dose estimations outside the target where a larger fraction of dose is from scattered photons. It is important with awareness and a clear reporting of absorbed dose values in using model based algorithms. Differences in bone media can exceed 10% depending on how dose to water in medium is defined.
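
    Toggling between dose to medium and dose to water in medium with mass energy absorption coefficient ratios is a pointwise scaling, Dw,m = Dm x (mu_en/rho)_water / (mu_en/rho)_medium, with the ratio averaged over the local 192-Ir photon spectrum. The sketch below uses placeholder ratio values, not spectrum-derived data.

      import numpy as np

      # Placeholder spectrum-averaged (mu_en/rho) ratios water/medium for a 192-Ir spectrum;
      # real values would be derived from the local photon fluence spectrum.
      MU_EN_RATIO_W_OVER_M = {"soft_tissue": 1.01, "bone": 1.02, "lung": 1.00}

      def dose_to_water_in_medium(dose_to_medium, medium_map):
          """Scale a dose-to-medium grid to dose-to-water-in-medium, voxel by voxel."""
          ratio = np.vectorize(MU_EN_RATIO_W_OVER_M.get)(medium_map)
          return dose_to_medium * ratio

      dm = np.array([[2.0, 2.0], [1.5, 1.0]])                       # Gy, dose to medium
      media = np.array([["soft_tissue", "bone"], ["lung", "bone"]])
      print(dose_to_water_in_medium(dm, media))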

  15. Gradient maintenance: A new algorithm for fast online replanning

    SciTech Connect

    Ahunbay, Ergun E. Li, X. Allen

    2015-06-15

    Purpose: Clinical use of online adaptive replanning has been hampered by the impractically long time required to delineate volumes based on the image of the day. The authors propose a new replanning algorithm, named gradient maintenance (GM), which does not require the delineation of organs at risk (OARs), and can enhance automation, drastically reducing planning time and improving consistency and throughput of online replanning. Methods: The proposed GM algorithm is based on the hypothesis that if the dose gradient toward each OAR in daily anatomy can be maintained the same as that in the original plan, the intended plan quality of the original plan would be preserved in the adaptive plan. The algorithm requires a series of partial concentric rings (PCRs) to be automatically generated around the target toward each OAR on the planning and the daily images. The PCRs are used in the daily optimization objective function. The PCR dose constraints are generated with dose–volume data extracted from the original plan. To demonstrate this idea, GM plans generated using daily images acquired using an in-room CT were compared to regular optimization and image guided radiation therapy repositioning plans for representative prostate and pancreatic cancer cases. Results: The adaptive replanning using the GM algorithm, requiring only the target contour from the CT of the day, can be completed within 5 min without using high-power hardware. The obtained adaptive plans were almost as good as the regular optimization plans and were better than the repositioning plans for the cases studied. Conclusions: The newly proposed GM replanning algorithm, requiring only target delineation, not full delineation of OARs, substantially increased planning speed for online adaptive replanning. The preliminary results indicate that the GM algorithm may be a solution to improve the ability for automation and may be especially suitable for sites with small-to-medium size targets surrounded by

  16. TH-A-19A-06: Site-Specific Comparison of Analytical and Monte Carlo Based Dose Calculations

    SciTech Connect

    Schuemann, J; Grassberger, C; Paganetti, H; Dowdell, S

    2014-06-15

    Purpose: To investigate the impact of complex patient geometries on the capability of analytical dose calculation algorithms to accurately predict dose distributions and to verify currently used uncertainty margins in proton therapy. Methods: Dose distributions predicted by an analytical pencil-beam algorithm were compared with Monte Carlo simulations (MCS) using TOPAS. 79 complete patient treatment plans were investigated for 7 disease sites (liver, prostate, breast, medulloblastoma spine and whole brain, lung and head and neck). A total of 508 individual passively scattered treatment fields were analyzed for field specific properties. Comparisons based on target coverage indices (EUD, D95, D90 and D50) were performed. Range differences were estimated for the distal position of the 90% dose level (R90) and the 50% dose level (R50). Two-dimensional distal dose surfaces were calculated and the root mean square differences (RMSD), average range difference (ARD) and average distal dose degradation (ADD), the distance between the distal position of the 80% and 20% dose levels (R80 - R20), were analyzed. Results: We found target coverage indices calculated by TOPAS to generally be around 1–2% lower than predicted by the analytical algorithm. Differences in R90 predicted by TOPAS and the planning system can be larger than currently applied range margins in proton therapy for small regions distal to the target volume. We estimate new site-specific range margins (R90) for analytical dose calculations considering total range uncertainties and uncertainties from dose calculation alone based on the RMSD. Our results demonstrate that a reduction of currently used uncertainty margins is feasible for liver, prostate and whole brain fields even without introducing MC dose calculations. Conclusion: Analytical dose calculation algorithms predict dose distributions within clinical limits for more homogeneous patient sites (liver, prostate, whole brain). However, we recommend

  17. From AAA to Acuros XB-clinical implications of selecting either Acuros XB dose-to-water or dose-to-medium.

    PubMed

    Zifodya, Jackson M; Challens, Cameron H C; Hsieh, Wen-Long

    2016-06-01

    When implementing Acuros XB (AXB) as a substitute for the anisotropic analytical algorithm (AAA) in the Eclipse Treatment Planning System, one is faced with the dilemma of reporting either dose to medium (AXB-Dm) or dose to water (AXB-Dw). To assist with decision making on selecting either AXB-Dm or AXB-Dw for dose reporting, a retrospective study of treated patients for head & neck (H&N), prostate, breast and lung is presented. Ten patients, previously treated using AAA plans, were selected for each site and re-planned with AXB-Dm and AXB-Dw. Re-planning was done with fixed monitor units (MU) as well as non-fixed MUs. Dose volume histograms (DVH) of targets and organs at risk (OAR) were analyzed in conjunction with ICRU-83 recommended dose reporting metrics. Additionally, comparisons of plan homogeneity indices (HI) and MUs were done to further highlight the differences between the algorithms. Results showed that, on average, AAA overestimated the dose to the target volume and OARs by less than 2.0 %. Comparisons between AXB-Dw and AXB-Dm, for all sites, also showed overall dose differences to be small (<1.5 %). However, in non-water biological media, dose differences between AXB-Dw and AXB-Dm as large as 4.6 % were observed. AXB-Dw also tended to have unexpectedly high 3D maximum dose values (>135 % of prescription dose) for target volumes with high-density materials. Homogeneity indices showed that AAA planning and optimization templates would need to be adjusted only for the H&N and Lung sites. MU comparison showed insignificant differences between AXB-Dw relative to AAA and between AXB-Dw relative to AXB-Dm. However, AXB-Dm MUs relative to AAA showed an average difference of about 1.3 %, signifying an underdosage by AAA. In conclusion, when dose is reported as AXB-Dw, the effect that high-density structures in the PTV have on the dose distribution should be carefully considered. As the results show overall small dose differences between the algorithms, when

  18. Quantitative comparison of dose distribution in radiotherapy plans using 2D gamma maps and X-ray computed tomography

    PubMed Central

    Balosso, Jacques

    2016-01-01

    Background The advanced dose calculation algorithms implemented in treatment planning systems (TPS) have remarkably improved the accuracy of dose calculation, especially the modeling of electron transport in low-density media. The purpose of this study is to evaluate the use of the 2D gamma (γ) index to quantify and evaluate the impact of the calculation of electron transport on dose distribution for lung radiotherapy. Methods X-ray computed tomography images were used to calculate the dose for twelve radiotherapy treatment plans. The doses were originally calculated with the Modified Batho (MB) 1D density correction method, and recalculated with the anisotropic analytical algorithm (AAA), using the same prescribed dose. Dose parameters derived from dose volume histograms (DVH) and target coverage indices were compared. To compare dose distributions, the 2D γ-index was applied, with criteria ranging from 1%/1 mm to 6%/6 mm. The results were displayed using γ-maps in 2D. Correlation between DVH metrics and γ passing rates was tested using Spearman's rank test, and the Wilcoxon paired test was used to calculate P values. Results The plans generated with AAA predicted a more heterogeneous dose distribution inside the target, with P<0.05. However, MB overestimated the dose, predicting more coverage of the target by the prescribed dose. The γ analysis showed that the difference between MB and AAA could reach up to ±10%. The 2D γ-maps illustrated that AAA predicted more dose to organs at risk, as well as a lower dose to the target, compared to MB. Conclusions Taking into account electron transport in radiotherapy plans had a significant impact on the delivered dose and dose distribution. If AAA is considered to represent the true cumulative dose, a readjustment of the prescribed dose and an optimization to protect the organs at risk should be considered in order to obtain a better clinical outcome. PMID:27429908

  19. Survey of clinical doses from computed tomography examinations in the Canadian province of Manitoba.

    PubMed

    A Elbakri, Idris; D C Kirkpatrick, Iain

    2013-12-01

    The purpose of this study was to document CT doses for common CT examinations performed throughout the province of Manitoba. Survey forms were sent out to all provincial CT sites. Thirteen out of sixteen (81 %) sites participated. The authors assessed scans of the brain, routine abdomen-pelvis, routine chest, sinuses, lumbar spine, low-dose lung nodule studies, CT pulmonary angiograms, CT KUBs, CT colonographies and combination chest-abdomen-pelvis exams. Sites recorded scanner model, protocol techniques and patient and dose data for 100 consecutive patients who were scanned with any of the aforementioned examinations. Mean effective doses and standard deviations for the province and for individual scanners were computed. The Kruskal-Wallis test was used to compare the variability of effective doses amongst scanners. The t test was used to compare doses and their provincial ranges between newer and older scanners and scanners that used dose saving tools and those that did not. Abdomen-pelvis, chest and brain scans accounted for over 70 % of scans. Their mean effective doses were 18.0 ± 6.7, 13.2 ± 6.4 and 3.0 ± 1.0 mSv, respectively. Variations in doses amongst scanners were statistically significant. Most examinations were performed at 120 kVp, and no lower kVp was used. Dose variations due to scanner age and use of dose saving tools were not statistically significant. Clinical CT doses in Manitoba are broadly similar to but higher than those reported in other Canadian provinces. Results suggest that further dose reduction can be achieved by modifying scanning techniques, such as using lower kVp. Wide variation in doses amongst different scanners suggests that standardisation of scanning protocols can reduce patient dose. New technological advances, such as dose-reduction software algorithms, can be adopted to reduce patient dose. PMID:23803227

  20. Algorithm for Autonomous Landing

    NASA Technical Reports Server (NTRS)

    Kuwata, Yoshiaki

    2011-01-01

    Because of their small size, high maneuverability, and easy deployment, micro aerial vehicles (MAVs) are used for a wide variety of both civilian and military missions. One of their current drawbacks is the vast array of sensors (such as GPS, altimeter, radar, and the like) required to make a landing. Due to the MAV's small payload size, this is a major concern. Replacing the imaging sensors with a single monocular camera is sufficient to land a MAV. By applying optical flow algorithms to images obtained from the camera, time-to-collision can be measured. This is a measurement of position and velocity (but not of absolute distance), and can avoid obstacles as well as facilitate a landing on a flat surface given a set of initial conditions. The key to this approach is to calculate time-to-collision based on some image on the ground. By holding the angular velocity constant, horizontal speed decreases linearly with the height, resulting in a smooth landing. Mathematical proofs show that even with actuator saturation or modeling/measurement uncertainties, MAVs can land safely. Landings of this nature may have a higher velocity than is desirable, but this can be compensated for by a cushioning or dampening system, or by using a system of legs to grab onto a surface. Such a monocular camera system can increase vehicle payload size (or correspondingly reduce vehicle size), increase speed of descent, and guarantee a safe landing by directly correlating speed to height from the ground.
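
    The time-to-collision measurement can be illustrated with the divergence of the optical-flow field: for frontal approach to a flat surface the flow expands radially from the focus of expansion, and its divergence equals 2/TTC. The sketch below uses a synthetic flow field; a real system would first estimate the flow from consecutive camera frames.

      import numpy as np

      def time_to_collision(u, v, dt=1.0):
          """Estimate time-to-collision from the divergence of an optical-flow field.

          Flow components u, v are in pixels per frame; dt is the frame interval in seconds.
          """
          div = np.gradient(u, axis=1) + np.gradient(v, axis=0)   # du/dx + dv/dy
          return 2.0 * dt / np.mean(div)

      # synthetic expanding flow for a surface reached in 50 frames
      h, w = 120, 160
      ys, xs = np.mgrid[0:h, 0:w]
      xs, ys = xs - w / 2.0, ys - h / 2.0      # center coordinates at the focus of expansion
      u, v = xs / 50.0, ys / 50.0              # radial expansion, TTC = 50 frames
      print(time_to_collision(u, v))           # ~50 frames to collision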

  1. Advanced software algorithms

    SciTech Connect

    Berry, K.; Dayton, S.

    1996-10-28

    Citibank was using a data collection system to create a one-time-only mailing history on prospective credit card customers; the system was becoming dated in its time-to-market requirements and as such was in need of performance improvements. To compound problems with the existing system, assurance of the quality of the data matching process was manpower intensive and needed to be automated. Analysis, design, and prototyping capabilities involving information technology were areas of expertise provided by the DOE-LMES Data Systems Research and Development (DSRD) program. The goal of this project was for DSRD to analyze the current Citibank credit card offering system and to suggest and prototype technology improvements that would result in faster processing with quality as good as the current system. Technologies investigated include: a high-speed network of reduced instruction set computing (RISC) processors for loosely coupled parallel processing; tightly coupled, high-performance parallel processing; higher-order computer languages such as C; fuzzy matching algorithms applied to very large data files; relational database management systems; and advanced programming techniques.

  2. FORTRAN Algorithm for Image Processing

    NASA Technical Reports Server (NTRS)

    Roth, Don J.; Hull, David R.

    1987-01-01

    FORTRAN computer algorithm containing various image-processing analysis and enhancement functions developed. Algorithm developed specifically to process images of developmental heat-engine materials obtained with sophisticated nondestructive evaluation instruments. Applications of program include scientific, industrial, and biomedical imaging for studies of flaws in materials, analyses of steel and ores, and pathology.

  3. Computer algorithm for coding gain

    NASA Technical Reports Server (NTRS)

    Dodd, E. E.

    1974-01-01

    Development of a computer algorithm for coding gain for use in an automated communications link design system. Using an empirical formula which defines coding gain as used in space communications engineering, an algorithm is constructed on the basis of available performance data for nonsystematic convolutional encoding with soft-decision (eight-level) Viterbi decoding.

  4. Cascade Error Projection Learning Algorithm

    NASA Technical Reports Server (NTRS)

    Duong, T. A.; Stubberud, A. R.; Daud, T.

    1995-01-01

    A detailed mathematical analysis is presented for a new learning algorithm termed cascade error projection (CEP) and a general learning framework. This framework can be used to obtain the cascade correlation learning algorithm by choosing a particular set of parameters.

  5. Monte Carlo verification of IMRT dose distributions from a commercial treatment planning optimization system

    NASA Astrophysics Data System (ADS)

    Ma, C.-M.; Pawlicki, T.; Jiang, S. B.; Li, J. S.; Deng, J.; Mok, E.; Kapur, A.; Xing, L.; Ma, L.; Boyer, A. L.

    2000-09-01

    The purpose of this work was to use Monte Carlo simulations to verify the accuracy of the dose distributions from a commercial treatment planning optimization system (Corvus, Nomos Corp., Sewickley, PA) for intensity-modulated radiotherapy (IMRT). A Monte Carlo treatment planning system has been implemented clinically to improve and verify the accuracy of radiotherapy dose calculations. Further modifications to the system were made to compute the dose in a patient for multiple fixed-gantry IMRT fields. The dose distributions in the experimental phantoms and in the patients were calculated and used to verify the optimized treatment plans generated by the Corvus system. The Monte Carlo calculated IMRT dose distributions agreed with the measurements to within 2% of the maximum dose for all the beam energies and field sizes for both the homogeneous and heterogeneous phantoms. The dose distributions predicted by the Corvus system, which employs a finite-size pencil beam (FSPB) algorithm, agreed with the Monte Carlo simulations and measurements to within 4% in a cylindrical water phantom with various hypothetical target shapes. Discrepancies of more than 5% (relative to the prescribed target dose) in the target region and over 20% in the critical structures were found in some IMRT patient calculations. The FSPB algorithm as implemented in the Corvus system is adequate for homogeneous phantoms (such as prostate) but may result in significant under- or over-estimation of the dose in some cases involving heterogeneities such as the air-tissue, lung-tissue and tissue-bone interfaces.

  6. Validation of a track repeating algorithm for intensity modulated proton therapy: clinical cases study

    NASA Astrophysics Data System (ADS)

    Yepes, Pablo P.; Eley, John G.; Liu, Amy; Mirkovic, Dragan; Randeniya, Sharmalee; Titt, Uwe; Mohan, Radhe

    2016-04-01

    Monte Carlo (MC) methods are acknowledged as the most accurate technique to calculate dose distributions. However, due to their lengthy calculation times, they are difficult to utilize in the clinic or for large retrospective studies. Track-repeating algorithms, based on MC-generated particle track data in water, accelerate dose calculations substantially, while essentially preserving the accuracy of MC. In this study, we present the validation of an efficient dose calculation algorithm for intensity modulated proton therapy, the fast dose calculator (FDC), based on a track-repeating technique. We validated the FDC algorithm for 23 patients, which included 7 brain, 6 head-and-neck, 5 lung, 1 spine, 1 pelvis and 3 prostate cases. For validation, we compared FDC-generated dose distributions with those from a full-fledged Monte Carlo based on GEANT4 (G4). We compared dose-volume histograms and 3D gamma indices, and analyzed a series of dosimetric indices. More than 99% of the voxels in the voxelized phantoms describing the patients have a gamma index smaller than unity for the 2%/2 mm criteria. In addition, the difference relative to the prescribed dose between the dosimetric indices calculated with FDC and G4 is less than 1%. FDC reduces the calculation times from 5 ms per proton to around 5 μs.

  7. Validation of a track repeating algorithm for intensity modulated proton therapy: clinical cases study.

    PubMed

    Yepes, Pablo P; Eley, John G; Liu, Amy; Mirkovic, Dragan; Randeniya, Sharmalee; Titt, Uwe; Mohan, Radhe

    2016-04-01

    Monte Carlo (MC) methods are acknowledged as the most accurate technique to calculate dose distributions. However, due to their lengthy calculation times, they are difficult to utilize in the clinic or for large retrospective studies. Track-repeating algorithms, based on MC-generated particle track data in water, accelerate dose calculations substantially, while essentially preserving the accuracy of MC. In this study, we present the validation of an efficient dose calculation algorithm for intensity modulated proton therapy, the fast dose calculator (FDC), based on a track-repeating technique. We validated the FDC algorithm for 23 patients, which included 7 brain, 6 head-and-neck, 5 lung, 1 spine, 1 pelvis and 3 prostate cases. For validation, we compared FDC-generated dose distributions with those from a full-fledged Monte Carlo based on GEANT4 (G4). We compared dose-volume histograms and 3D gamma indices, and analyzed a series of dosimetric indices. More than 99% of the voxels in the voxelized phantoms describing the patients have a gamma index smaller than unity for the 2%/2 mm criteria. In addition, the difference relative to the prescribed dose between the dosimetric indices calculated with FDC and G4 is less than 1%. FDC reduces the calculation times from 5 ms per proton to around 5 μs. PMID:26961764

  8. SU-E-T-344: Validation and Clinical Experience of Eclipse Electron Monte Carlo Algorithm (EMC)

    SciTech Connect

    Pokharel, S; Rana, S

    2014-06-01

    Purpose: The purpose of this study is to validate the Eclipse Electron Monte Carlo (EMC) algorithm for routine clinical use. Methods: The PTW inhomogeneity phantom (T40037) with different combinations of heterogeneous slabs was CT-scanned with a Philips Brilliance 16-slice scanner. The phantom contains blocks of Rando Alderson material mimicking lung, polystyrene (tissue), PTFE (bone) and PMMA. The phantom has a 30 × 30 × 2.5 cm base plate with 2 cm recesses to insert inhomogeneities. The detector systems used in this study were diodes, TLDs and Gafchromic EBT2 film. The diode and TLDs were included in the CT scans. The CT sets were transferred to the Eclipse treatment planning system. Several plans were created with the Eclipse Monte Carlo (EMC) algorithm, version 11.0.21. Measurements were carried out on a Varian TrueBeam machine for energies from 6 to 22 MeV. Results: The measured and calculated doses agreed very well for tissue-like media. The agreement was reasonable in the presence of lung inhomogeneity. The point dose agreement was within 3.5% and the gamma passing rate at 3%/3 mm was greater than 93% except for 6 MeV (85%). The disagreement can reach as high as 10% in the presence of bone inhomogeneity. This is because Eclipse reports dose to medium, as opposed to dose to water as in conventional calculation engines. Conclusion: Care must be taken when using the Varian Eclipse EMC algorithm for routine clinical dose calculation. The algorithm does not report dose to water, on which most clinical experience is based; rather, it reports dose to medium directly. In the presence of inhomogeneities such as bone, the dose discrepancy can be as high as 10% or even more depending on the location of the normalization point or volume. As radiation oncology is an empirical science, care must be taken before using EMC-reported monitor units for clinical use.

  9. Low-dose CT reconstruction via edge-preserving total variation regularization.

    PubMed

    Tian, Zhen; Jia, Xun; Yuan, Kehong; Pan, Tinsu; Jiang, Steve B

    2011-09-21

    High radiation dose in computed tomography (CT) scans increases the lifetime risk of cancer and has become a major clinical concern. Recently, iterative reconstruction algorithms with total variation (TV) regularization have been developed to reconstruct CT images from highly undersampled data acquired at low mAs levels in order to reduce the imaging dose. Nonetheless, the low-contrast structures tend to be smoothed out by the TV regularization, posing a great challenge for the TV method. To solve this problem, in this work we develop an iterative CT reconstruction algorithm with edge-preserving TV (EPTV) regularization to reconstruct CT images from highly undersampled data obtained at low mAs levels. The CT image is reconstructed by minimizing an energy consisting of an EPTV norm and a data fidelity term posed by the x-ray projections. The EPTV term is proposed to preferentially perform smoothing only on the non-edge part of the image in order to better preserve the edges, which is realized by introducing a penalty weight to the original TV norm. During the reconstruction process, the pixels at the edges would be gradually identified and given low penalty weight. Our iterative algorithm is implemented on a graphics processing unit to improve its speed. We test our reconstruction algorithm on a digital NURBS-based cardiac-torso phantom, a physical chest phantom and a Catphan phantom. Reconstruction results from a conventional filtered backprojection (FBP) algorithm and a TV regularization method without edge-preserving penalty are also presented for comparison purposes. The experimental results illustrate that both the TV-based algorithm and our EPTV algorithm outperform the conventional FBP algorithm in suppressing the streaking artifacts and image noise under a low-dose context. Our edge-preserving algorithm is superior to the TV-based algorithm in that it can preserve more information of low-contrast structures and therefore maintain acceptable spatial resolution
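
    A minimal sketch of the edge-preserving penalty-weight idea, shown here on a simple image-denoising problem rather than the full projection-domain reconstruction described above. The weight function, delta, step size and fidelity weight are assumptions chosen only to illustrate that smoothing is reduced where the local gradient is large.

    import numpy as np

    rng = np.random.default_rng(0)
    img = np.zeros((64, 64))
    img[16:48, 16:48] = 1.0                         # low-contrast structure
    noisy = img + 0.1 * rng.standard_normal(img.shape)

    def grad_mag(u):
        gx = np.diff(u, axis=0, append=u[-1:, :])
        gy = np.diff(u, axis=1, append=u[:, -1:])
        return np.sqrt(gx**2 + gy**2)

    u = noisy.copy()
    delta, step, lam = 0.2, 0.2, 0.5                # assumed parameters
    for _ in range(100):
        # Edge-preserving weights: near 1 in flat regions, small near edges,
        # so the smoothing step spares the edges.
        w = 1.0 / (1.0 + (grad_mag(u) / delta) ** 2)
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
        u += step * (w * lap - lam * (u - noisy))   # weighted smoothing + data fidelity

    print("RMSE noisy :", np.sqrt(np.mean((noisy - img) ** 2)))
    print("RMSE EPTV  :", np.sqrt(np.mean((u - img) ** 2)))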

  10. The Chopthin Algorithm for Resampling

    NASA Astrophysics Data System (ADS)

    Gandy, Axel; Lau, F. Din-Houn

    2016-08-01

    Resampling is a standard step in particle filters and more generally sequential Monte Carlo methods. We present an algorithm, called chopthin, for resampling weighted particles. In contrast to standard resampling methods, the algorithm does not produce a set of equally weighted particles; instead it merely enforces an upper bound on the ratio between the weights. Simulation studies show that the chopthin algorithm consistently outperforms standard resampling methods. The algorithm chops up particles with large weight and thins out particles with low weight, hence its name. It implicitly guarantees a lower bound on the effective sample size. The algorithm can be implemented efficiently, making it practically useful. We show that the expected computational effort is linear in the number of particles. Implementations for C++, R (on CRAN), Python and Matlab are available.
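
    A rough sketch of the chop/thin idea as described above: heavy particles are split into copies and light particles are stochastically dropped with a compensating weight, keeping every weight within a factor of the mean (so the max/min weight ratio is bounded). This is a simplified interpretation for illustration only, not the published chopthin algorithm; the thresholds and correction scheme are assumptions.

    import numpy as np

    def chopthin_like(particles, weights, max_ratio=4.0, rng=None):
        rng = rng or np.random.default_rng()
        w = np.asarray(weights, dtype=float)
        w = w / w.sum()
        hi = max_ratio * w.mean()           # chop weights above this
        lo = w.mean() / max_ratio           # thin weights below this

        out_p, out_w = [], []
        for p, wi in zip(particles, w):
            if wi > hi:
                # "Chop": split into copies whose weights respect the bound
                n = int(np.ceil(wi / hi))
                out_p.extend([p] * n)
                out_w.extend([wi / n] * n)
            elif wi < lo:
                # "Thin": keep with probability wi/lo and raise the weight to lo,
                # so the expected contribution is unchanged
                if rng.random() < wi / lo:
                    out_p.append(p)
                    out_w.append(lo)
            else:
                out_p.append(p)
                out_w.append(wi)
        out_w = np.asarray(out_w)
        return out_p, out_w / out_w.sum()

    weights = np.array([0.50, 0.20, 0.10, 0.05, 0.05, 0.04, 0.03, 0.01, 0.01, 0.01])
    new_p, new_w = chopthin_like(list(range(10)), weights)
    print(len(new_p), "particles; max/min weight ratio =", new_w.max() / new_w.min())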

  11. CORDIC algorithms in four dimensions

    NASA Astrophysics Data System (ADS)

    Delosme, Jean-Marc; Hsiao, Shen-Fu

    1990-11-01

    CORDIC algorithms offer an attractive alternative to multiply-and-add based algorithms for the implementation of two-dimensional rotations preserving either norm: (x^2 + y^2)^(1/2) or (x^2 - y^2)^(1/2). Indeed, these norms, whose computation is a significant part of the evaluation of the two-dimensional rotations, are computed much more easily by the CORDIC algorithms. However, the part played by norm computations in the evaluation of rotations quickly becomes small as the dimension of the space increases. Thus, in spaces of dimension 5 or more there is no practical alternative to multiply-and-add based algorithms. In the intermediate region, dimensions 3 and 4, extensions of the CORDIC algorithms are an interesting option. The four-dimensional extensions are particularly elegant and are the main object of this paper.
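
    For reference, a standard two-dimensional circular CORDIC rotation, using only shift-and-add style micro-rotations; this illustrates the norm-preserving iterations that the paper extends to three and four dimensions. The iteration count is an arbitrary assumption.

    import math

    def cordic_rotate(x, y, angle, n_iter=32):
        """Rotate (x, y) by `angle` radians with CORDIC micro-rotations."""
        # Precomputed micro-rotation angles arctan(2^-i) and the overall gain
        atans = [math.atan(2.0 ** -i) for i in range(n_iter)]
        gain = 1.0
        for i in range(n_iter):
            gain *= math.sqrt(1.0 + 2.0 ** (-2 * i))

        z = angle
        for i in range(n_iter):
            d = 1.0 if z >= 0 else -1.0           # steer toward the residual angle
            x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
            z -= d * atans[i]
        return x / gain, y / gain                  # remove the constant CORDIC gain

    xr, yr = cordic_rotate(1.0, 0.0, math.pi / 3)
    print(xr, yr)   # approximately (cos 60 deg, sin 60 deg) = (0.5, 0.866)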

  12. Cubit Adaptive Meshing Algorithm Library

    2004-09-01

    CAMAL (Cubit adaptive meshing algorithm library) is a software component library for mesh generation. CAMAL 2.0 includes components for triangle, quad and tetrahedral meshing. A simple Application Programmers Interface (API) takes a discrete boundary definition and CAMAL computes a quality interior unstructured grid. The triangle and quad algorithms may also import a geometric definition of a surface on which to define the grid. CAMAL’s triangle meshing uses a 3D space advancing front method, the quad meshing algorithm is based upon Sandia’s patented paving algorithm, and the tetrahedral meshing algorithm employs the GHS3D-Tetmesh component developed by INRIA, France.

  13. Testing an earthquake prediction algorithm

    USGS Publications Warehouse

    Kossobokov, V.G.; Healy, J.H.; Dewey, J.W.

    1997-01-01

    A test to evaluate earthquake prediction algorithms is being applied to a Russian algorithm known as M8. The M8 algorithm makes intermediate-term predictions for earthquakes to occur in a large circle, based on integral counts of transient seismicity in the circle. In a retroactive prediction for the period January 1, 1985 to July 1, 1991, the algorithm as configured for the forward test would have predicted eight of ten strong earthquakes in the test area. A null hypothesis, based on random assignment of predictions, predicts eight earthquakes in 2.87% of the trials. The forward test began July 1, 1991 and will run through December 31, 1997. As of July 1, 1995, the algorithm had forward predicted five out of nine earthquakes in the test area, a success ratio that would have been achieved in 53% of random trials under the null hypothesis.

  14. An Artificial Immune Univariate Marginal Distribution Algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Qingbin; Kang, Shuo; Gao, Junxiang; Wu, Song; Tian, Yanping

    Hybridization is an extremely effective way of improving the performance of the Univariate Marginal Distribution Algorithm (UMDA). Owing to its diversity and memory mechanisms, the artificial immune algorithm has been widely used to construct hybrid algorithms with other optimization algorithms. This paper proposes a hybrid algorithm which combines the UMDA with the principles of a general artificial immune algorithm. Experimental results on a deceptive function of order 3 show that the proposed hybrid algorithm can obtain more building blocks (BBs) than the UMDA.
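
    A minimal sketch of a plain (non-hybrid) UMDA on a concatenated order-3 deceptive function, to show the baseline that the hybrid above improves upon. The trap values, population sizes and selection scheme are arbitrary assumptions; the immune-inspired mechanisms are not included.

    import numpy as np

    rng = np.random.default_rng(1)

    def deceptive3(bits):
        """Order-3 trap: optimum at 111, deceptive gradient toward 000."""
        table = {0: 0.9, 1: 0.8, 2: 0.0, 3: 1.0}
        return table[int(bits.sum())]

    def fitness(x, k=3):
        return sum(deceptive3(x[i:i + k]) for i in range(0, len(x), k))

    n_bits, pop_size, n_select, n_gen = 30, 200, 100, 60
    p = np.full(n_bits, 0.5)                       # univariate marginal model

    for _ in range(n_gen):
        pop = (rng.random((pop_size, n_bits)) < p).astype(int)
        fits = np.array([fitness(ind) for ind in pop])
        selected = pop[np.argsort(fits)[-n_select:]]     # truncation selection
        p = selected.mean(axis=0)                         # re-estimate marginals
        p = np.clip(p, 0.02, 0.98)                        # retain some diversity

    sample = (rng.random((1000, n_bits)) < p).astype(int)
    print("best sampled fitness:", max(fitness(ind) for ind in sample),
          "(optimum =", n_bits // 3, ")")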

  15. Spatial dose distribution in polymer pipes exposed to electron beam

    NASA Astrophysics Data System (ADS)

    Ponomarev, Alexander V.

    2016-01-01

    The non-uniform distribution of absorbed dose in the cross-section of a polymeric pipe is caused by the non-uniform thickness of the polymer layer penetrated by a unidirectional electron beam. A special computer program was created for prompt estimation of dose non-uniformity in pipes subjected to irradiation by a 1-10 MeV electron beam. Irrespective of electron beam energy, the local doses absorbed in the bulk of a material can be calculated on the basis of the universal correlations offered in this work. Incomplete deceleration of electrons in shallow layers of a polymer was taken into account. The algorithm allows wide variation of pipe sizes, polymer properties and irradiation modes. Both unilateral and multilateral irradiation can be simulated.

  16. A study of the dosimetry of small field photon beams used in intensity-modulated radiation therapy in inhomogeneous media: Monte Carlo simulations and algorithm comparisons and corrections

    NASA Astrophysics Data System (ADS)

    Jones, Andrew Osler

    There is an increasing interest in the use of inhomogeneity corrections for lung, air, and bone in radiotherapy treatment planning. Traditionally, corrections based on physical density have been used. Modern algorithms use the electron density derived from CT images. Small fields are used in both conformal radiotherapy and IMRT; however, their beam characteristics in inhomogeneous media have not been extensively studied. This work compares traditional and modern treatment planning algorithms to Monte Carlo simulations in and near low-density inhomogeneities. Field sizes ranging from 0.5 cm to 5 cm in diameter are projected onto a phantom containing inhomogeneities and depth dose curves are compared. Comparisons of the Dose Perturbation Factors (DPF) are presented as functions of density and field size. Dose Correction Factors (DCF), which scale the algorithms to the Monte Carlo data, are compared for each algorithm. Physical scaling algorithms such as Batho and Equivalent Pathlength (EPL) predict an increase in dose for small fields passing through lung tissue, where Monte Carlo simulations show a sharp dose drop. The physical model-based collapsed cone convolution (CCC) algorithm correctly predicts the dose drop, but does not accurately predict the magnitude. Because the model-based algorithms do not correctly account for the change in backscatter, the dose drop predicted by CCC occurs further downstream compared to that predicted by the Monte Carlo simulations. Beyond the tissue inhomogeneity, all of the algorithms studied predict dose distributions in close agreement with Monte Carlo simulations. Dose-volume relationships are important in understanding the effects of radiation to the lung. Dose within the lung is affected by a complex function of beam energy, lung tissue density, and field size. Dose algorithms vary in their abilities to correctly predict the dose to the lung tissue. A thorough analysis of the effects of density and field size on dose to the lung

  17. Dose refinement. ARAC's role

    SciTech Connect

    Ellis, J. S.; Sullivan, T. J.; Baskett, R. L.

    1998-06-01

    The Atmospheric Release Advisory Capability (ARAC), located at the Lawrence Livermore National Laboratory, since the late 1970's has been involved in assessing consequences from nuclear and other hazardous material releases into the atmosphere. ARAC's primary role has been emergency response. However, after the emergency phase, there is still a significant role for dispersion modeling. This work usually involves refining the source term and, hence, the dose to the populations affected as additional information becomes available in the form of source term estimates (release rates, mix of material, and release geometry) and any measurements from passage of the plume and deposition on the ground. Many of the ARAC responses have been documented elsewhere [1]. Some of the more notable radiological releases for which ARAC has participated in the post-emergency phase have been the 1979 Three Mile Island nuclear power plant (NPP) accident outside Harrisburg, PA, the 1986 Chernobyl NPP accident in the Ukraine, and the 1996 Japan Tokai nuclear processing plant explosion. ARAC has also done post-emergency phase analyses for the 1978 Russian satellite COSMOS 954 reentry and subsequent partial burn-up of its on-board nuclear reactor, depositing radioactive materials on the ground in Canada, the 1986 uranium hexafluoride spill in Gore, OK, the 1993 Russian Tomsk-7 nuclear waste tank explosion, and lesser releases of mostly tritium. In addition, ARAC has performed a key role in the contingency planning for possible accidental releases during the launch of spacecraft with radioisotope thermoelectric generators (RTGs) on board (i.e. Galileo, Ulysses, Mars-Pathfinder, and Cassini), and routinely exercises with the Federal Radiological Monitoring and Assessment Center (FRMAC) in preparation for offsite consequences of radiological releases from NPPs and nuclear weapon accidents or incidents. Several accident post-emergency phase assessments are discussed in this paper in order to illustrate

  18. Algorithmic advances in stochastic programming

    SciTech Connect

    Morton, D.P.

    1993-07-01

    Practical planning problems with deterministic forecasts of inherently uncertain parameters often yield unsatisfactory solutions. Stochastic programming formulations allow uncertain parameters to be modeled as random variables with known distributions, but the size of the resulting mathematical programs can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We consider two classes of decomposition-based stochastic programming algorithms. The first type of algorithm addresses problems with a "manageable" number of scenarios. The second class incorporates Monte Carlo sampling within a decomposition algorithm. We develop and empirically study an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs within a prespecified tolerance. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of "real-world" multistage stochastic hydroelectric scheduling problems. Recently, there has been an increased focus on decomposition-based algorithms that use sampling within the optimization framework. These approaches hold much promise for solving stochastic programs with many scenarios. A critical component of such algorithms is a stopping criterion to ensure the quality of the solution. With this as motivation, we develop a stopping rule theory for algorithms in which bounds on the optimal objective function value are estimated by sampling. Rules are provided for selecting sample sizes and terminating the algorithm under which asymptotic validity of confidence interval statements for the quality of the proposed solution can be verified. Issues associated with the application of this theory to two sampling-based algorithms are considered, and preliminary empirical coverage results are presented.
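
    A generic sketch of a sampling-based stopping rule of the kind described above: batch estimates of the optimality gap are averaged, and the algorithm stops once a one-sided confidence bound on the gap falls below a tolerance. The gap-sampling routine is a stand-in distribution, not the stochastic-program bound estimators developed in the record, and the tolerance and confidence level are assumptions.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def sample_gap_estimate():
        # Stand-in for (upper-bound estimate) - (lower-bound estimate)
        # obtained from one Monte Carlo batch of scenarios.
        return max(0.0, rng.normal(loc=0.5, scale=0.6))

    tol, alpha, max_batches = 1.0, 0.05, 1000
    gaps = [sample_gap_estimate() for _ in range(10)]      # small pilot sample

    while len(gaps) < max_batches:
        n = len(gaps)
        mean, se = np.mean(gaps), np.std(gaps, ddof=1) / np.sqrt(n)
        upper = mean + stats.t.ppf(1 - alpha, df=n - 1) * se   # one-sided CI bound
        if upper <= tol:
            print(f"stop after {n} batches: gap <= {upper:.3f} "
                  f"with ~{100 * (1 - alpha):.0f}% confidence")
            break
        gaps.append(sample_gap_estimate())
    else:
        print("reached batch limit without meeting tolerance")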

  19. Evaluation of lens dose from anterior electron beams: comparison of Pinnacle and Gafchromic EBT3 film.

    PubMed

    Sonier, Marcus; Wronski, Matt; Yeboah, Collins

    2015-01-01

    Lens dose is a concern during the treatment of facial lesions with anterior electron beams. Lead shielding is routinely employed to reduce lens dose and minimize late complications. The purpose of this work is twofold: 1) to measure dose profiles under large-area lead shielding at the lens depth for clinical electron energies via film dosimetry; and 2) to assess the accuracy of the Pinnacle treatment planning system in calculating doses under lead shields. First, to simulate the clinical geometry, EBT3 film and 4 cm wide lead shields were incorporated into a Solid Water phantom. With the lead shield inside the phantom, the film was positioned at a depth of 0.7 cm below the lead, while a variable thickness of solid water, simulating bolus, was placed on top. This geometry was reproduced in Pinnacle to calculate dose profiles using the pencil beam electron algorithm. The measured and calculated dose profiles were normalized to the central-axis dose maximum in a homogeneous phantom with no lead shielding. The resulting measured profiles, functions of bolus thickness and incident electron energy, can be used to estimate the lens dose under various clinical scenarios. These profiles showed that a minimum lead margin of 0.5 cm beyond the lens boundary is required to shield the lens to ≤ 10% of the dose maximum. Comparisons with Pinnacle showed a consistent overestimation of dose under the lead shield with discrepancies of ~ 25% occurring near the shield edge. This discrepancy was found to increase with electron energy and bolus thickness and decrease with distance from the lead edge. Thus, the Pinnacle electron algorithm is not recommended for estimating lens dose in this situation. The film measurements, however, allow for a reasonable estimate of lens dose from electron beams and for clinicians to assess the lead margin required to reduce the lens dose to an acceptable level. PMID:27074448

  20. Sensitivity of a mixed field dosimetry algorithm to uncertainties in thermoluminescent element readings

    SciTech Connect

    Kearfott, K.J.; Samei, E.; Han, S.

    1995-03-01

    An error analysis of the effects of the algorithms used to resolve the deep and shallow dose components for mixed fields from multi-element thermoluminescent (TLD) badge systems was undertaken for a commonly used system. Errors were introduced independently into each of the four element readings for a badge, and the effects on the calculated dose equivalents were observed. A normal random number generator was then utilized to introduce simultaneous variations in the element readings for different uncertainties. The Department of Energy Laboratory Accreditation Program radiation fields were investigated. Problems arising from the discontinuous nature of the algorithm were encountered for a number of radiation sources for which the algorithm misidentified the radiation field. Mixed fields of low energy photons and betas were found to present particular difficulties for the algorithm. The study demonstrates the importance of small fluctuations in the TLD element's response in a multi-element approach. 24 refs., 5 figs., 7 tabs.
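
    A sketch of the perturbation design described above: Gaussian noise is injected into the four element readings and a dose algorithm is re-applied to see how the computed dose equivalents spread. The linear-combination "algorithm" and its coefficients below are purely hypothetical stand-ins; the actual badge algorithm is discontinuous and field-dependent, which is exactly what the study probes.

    import numpy as np

    rng = np.random.default_rng(42)

    def hypothetical_dose_algorithm(elements):
        e1, e2, e3, e4 = elements
        deep = 0.9 * e1 + 0.1 * e2                        # hypothetical coefficients
        shallow = 0.5 * e1 + 0.3 * e2 + 0.2 * e3 + 0.4 * e4
        return deep, shallow

    nominal = np.array([100.0, 95.0, 80.0, 120.0])        # nominal element readings
    deep0, shallow0 = hypothetical_dose_algorithm(nominal)

    for rel_uncertainty in (0.02, 0.05, 0.10):
        readings = nominal * (1 + rel_uncertainty * rng.standard_normal((5000, 4)))
        results = np.array([hypothetical_dose_algorithm(r) for r in readings])
        print(f"element CV {rel_uncertainty:.0%}: "
              f"deep dose CV {results[:, 0].std() / deep0:.1%}, "
              f"shallow dose CV {results[:, 1].std() / shallow0:.1%}")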

  1. Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    Finch, S.M.; McMakin, A.H.

    1992-06-01

    The objective of the Hanford Environmental Dose Reconstruction Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The project is being managed and conducted by the Battelle Pacific Northwest Laboratories under contract with the Centers for Disease Control. The independent Technical Steering Panel (TSP) provides technical direction. The project is divided into the following technical tasks. These tasks correspond to the path radionuclides followed, from release to impact on humans (dose estimates): source terms; environmental transport; environmental monitoring data; demography, food consumption, and agriculture; environmental pathways and dose estimates.

  2. Psychotropic dose equivalence in Japan.

    PubMed

    Inada, Toshiya; Inagaki, Ataru

    2015-08-01

    Psychotropic dose equivalence is an important concept when estimating the approximate psychotropic doses patients receive, and deciding on the approximate titration dose when switching from one psychotropic agent to another. It is also useful from a research viewpoint when defining and extracting specific subgroups of subjects. Unification of various agents into a single standard agent facilitates easier analytical comparisons. On the basis of differences in psychopharmacological prescription features, those of available psychotropic agents and their approved doses, and racial differences between Japan and other countries, psychotropic dose equivalency tables designed specifically for Japanese patients have been widely used in Japan since 1998. Here we introduce dose equivalency tables for: (i) antipsychotics; (ii) antiparkinsonian agents; (iii) antidepressants; and (iv) anxiolytics, sedatives and hypnotics available in Japan. Equivalent doses for the therapeutic effects of individual psychotropic compounds were determined principally on the basis of randomized controlled trials conducted in Japan and consensus among dose equivalency tables reported previously by psychopharmacological experts. As these tables are intended to merely suggest approximate standard values, physicians should use them with discretion. Updated information of psychotropic dose equivalence in Japan is available at http://www.jsprs.org/en/equivalence.tables/. [Correction added on 8 July 2015, after first online publication: A link to the updated information has been added.]. PMID:25601291

  3. Nanoparticle-based cancer treatment: can delivered dose and biological dose be reliably modeled and quantified?

    NASA Astrophysics Data System (ADS)

    Hoopes, P. Jack; Petryk, Alicia A.; Giustini, Andrew J.; Stigliano, Robert V.; D'Angelo, Robert N.; Tate, Jennifer A.; Cassim, Shiraz M.; Foreman, Allan; Bischof, John C.; Pearce, John A.; Ryan, Thomas

    2011-03-01

    Essential developments in the reliable and effective use of heat in medicine include: 1) the ability to model energy deposition and the resulting thermal distribution and tissue damage (Arrhenius models) over time in 3D, 2) the development of non-invasive thermometry and imaging for tissue damage monitoring, and 3) the development of clinically relevant algorithms for accurate prediction of the biological effect resulting from a delivered thermal dose in mammalian cells, tissues, and organs. The accuracy and usefulness of this information varies with the type of thermal treatment, sensitivity and accuracy of tissue assessment, and volume, shape, and heterogeneity of the tumor target and normal tissue. That said, without the development of an algorithm that has allowed the comparison and prediction of the effects of hyperthermia in a wide variety of tumor and normal tissues and settings (cumulative equivalent minutes, CEM), hyperthermia would never have achieved clinical relevance. A new hyperthermia technology, magnetic nanoparticle-based hyperthermia (mNPH), has distinct advantages over the previous techniques: the ability to target the heat to individual cancer cells (with a nontoxic nanoparticle), and to excite the nanoparticles noninvasively with a noninjurious magnetic field, thus sparing associated normal cells and greatly improving the therapeutic ratio. As such, this modality has great potential as a primary and adjuvant cancer therapy. Although the targeted and safe nature of the noninvasive external activation (hysteretic heating) is a tremendous asset, the large number of therapy-based variables and the lack of an accurate and useful method for predicting, assessing and quantifying mNP dose and treatment effect is a major obstacle to moving the technology into routine clinical practice. Among other parameters, mNPH will require the accurate determination of specific nanoparticle heating capability, the total nanoparticle content and biodistribution in
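
    For reference, a sketch of the standard cumulative-equivalent-minutes thermal dose (CEM43) calculation referred to above, which converts a temperature-time history into equivalent minutes at 43 °C. The temperature history in the example is illustrative only.

    import numpy as np

    def cem43(temps_c, dt_min):
        """CEM43 = sum over time of dt * R**(43 - T),
        with R = 0.5 for T >= 43 degC and R = 0.25 for T < 43 degC."""
        temps_c = np.asarray(temps_c, dtype=float)
        R = np.where(temps_c >= 43.0, 0.5, 0.25)
        return float(np.sum(dt_min * R ** (43.0 - temps_c)))

    # Example: ~30 minutes near 44 degC plus ramp-up and ramp-down
    dt = 0.5  # minutes per temperature sample
    history = [37, 39, 41, 42.5] + [44.0] * 60 + [42, 40, 38]
    print(f"thermal dose = {cem43(history, dt):.1f} CEM43")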

  4. Effect of Acuros XB algorithm on monitor units for stereotactic body radiotherapy planning of lung cancer

    SciTech Connect

    Khan, Rao F. Villarreal-Barajas, Eduardo; Lau, Harold; Liu, Hong-Wei

    2014-04-01

    Stereotactic body radiotherapy (SBRT) is a curative regimen that uses hypofractionated radiation-absorbed dose to achieve a high degree of local control in early stage non–small cell lung cancer (NSCLC). In the presence of heterogeneities, the dose calculation for the lungs becomes challenging. We have evaluated the dosimetric effect of the recently introduced advanced dose-calculation algorithm, Acuros XB (AXB), for SBRT of NSCLC. A total of 97 patients with early-stage lung cancer who underwent SBRT at our cancer center during the last 4 years were included. Initial clinical plans were created in Aria Eclipse version 8.9 or prior, using 6 to 10 fields with 6-MV beams, and dose was calculated using the anisotropic analytic algorithm (AAA) as implemented in the Eclipse treatment planning system. The clinical plans were recalculated in Aria Eclipse 11.0.21 using both AAA and AXB algorithms. Both sets of plans were normalized to the same prescription point at the center of mass of the target. A secondary monitor unit (MU) calculation was performed using the commercial program RadCalc for all of the fields. For the planning target volumes ranging from 19 to 375 cm³, a comparison of MUs was performed for both sets of algorithms on a field and plan basis. In total, variation of MUs for 677 treatment fields was investigated in terms of equivalent depth and the equivalent square of the field. Overall, MUs required by AXB to deliver the prescribed dose are on average 2% higher than AAA. Using a 2-tailed paired t-test, the MUs from the 2 algorithms were found to be significantly different (p < 0.001). The secondary independent MU calculator RadCalc underestimates the required MUs (on average by 4% to 5%) in the lung relative to either of the 2 dose algorithms.

  5. The Dropout Learning Algorithm

    PubMed Central

    Baldi, Pierre; Sadowski, Peter

    2014-01-01

    Dropout is a recently introduced algorithm for training neural networks by randomly dropping units during training to prevent their co-adaptation. A mathematical analysis of some of the static and dynamic properties of dropout is provided using Bernoulli gating variables, general enough to accommodate dropout on units or connections, and with variable rates. The framework allows a complete analysis of the ensemble averaging properties of dropout in linear networks, which is useful to understand the non-linear case. The ensemble averaging properties of dropout in non-linear logistic networks result from three fundamental equations: (1) the approximation of the expectations of logistic functions by normalized geometric means, for which bounds and estimates are derived; (2) the algebraic equality between normalized geometric means of logistic functions with the logistic of the means, which mathematically characterizes logistic functions; and (3) the linearity of the means with respect to sums, as well as products of independent variables. The results are also extended to other classes of transfer functions, including rectified linear functions. Approximation errors tend to cancel each other and do not accumulate. Dropout can also be connected to stochastic neurons and used to predict firing rates, and to backpropagation by viewing the backward propagation as ensemble averaging in a dropout linear network. Moreover, the convergence properties of dropout can be understood in terms of stochastic gradient descent. Finally, for the regularization properties of dropout, the expectation of the dropout gradient is the gradient of the corresponding approximation ensemble, regularized by an adaptive weight decay term with a propensity for self-consistent variance minimization and sparse representations. PMID:24771879
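
    A small numerical check of the ensemble-averaging property described in items (1)-(2) above, for a single logistic unit: the normalized weighted geometric mean (NWGM) of the sub-network outputs equals the logistic of the mean input, and it approximates the true expectation over dropout masks. The weights, inputs and keep probability are arbitrary.

    import itertools
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    rng = np.random.default_rng(0)
    w = rng.normal(size=8)          # weights into one logistic unit
    x = rng.normal(size=8)          # fixed input activities
    p = 0.5                         # dropout keep probability

    outs, probs = [], []
    for mask in itertools.product([0, 1], repeat=8):   # enumerate all sub-networks
        m = np.array(mask)
        outs.append(sigmoid(w @ (m * x)))
        probs.append(p ** m.sum() * (1 - p) ** (8 - m.sum()))
    outs, probs = np.array(outs), np.array(probs)

    expectation = np.sum(probs * outs)                            # E[output]
    geo = np.prod(outs ** probs)                                  # weighted geometric mean
    nwgm = geo / (geo + np.prod((1 - outs) ** probs))             # normalized (NWGM)
    logistic_of_mean = sigmoid(w @ (p * x))                       # logistic of expected input

    print(f"E[O] = {expectation:.4f}, NWGM = {nwgm:.4f}, "
          f"sigma(E[input]) = {logistic_of_mean:.4f}")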

  6. Wavelet periodicity detection algorithms

    NASA Astrophysics Data System (ADS)

    Benedetto, John J.; Pfander, Goetz E.

    1998-10-01

    This paper deals with the analysis of time series with respect to certain known periodicities. In particular, we shall present a fast method aimed at detecting periodic behavior inherent in noisy data. The method is composed of three steps: (1) Non-noisy data are analyzed through spectral and wavelet methods to extract specific periodic patterns of interest. (2) Using these patterns, we construct an optimal piecewise constant wavelet designed to detect the underlying periodicities. (3) We introduce a fast discretized version of the continuous wavelet transform, as well as waveletgram averaging techniques, to detect the occurrence and period of these periodicities. The algorithm is formulated to provide real-time implementation. Our procedure is generally applicable to detect locally periodic components in signals s which can be modeled as s(t) = A(t)F(h(t)) + N(t) for t in I, where F is a periodic signal, A is a non-negative slowly varying function, h is strictly increasing with h' slowly varying, and N denotes background activity. For example, the method can be applied in the context of epileptic seizure detection. In this case, we try to detect seizure periodics in EEG and ECoG data. In the case of ECoG data, N is essentially 1/f noise. In the case of EEG data and for t in I, N includes noise due to cranial geometry and densities. In both cases N also includes standard low frequency rhythms. Periodicity detection has other applications including ocean wave prediction, cockpit motion sickness prediction, and minefield detection.

  7. Scheduling with genetic algorithms

    NASA Technical Reports Server (NTRS)

    Fennel, Theron R.; Underbrink, A. J., Jr.; Williams, George P. W., Jr.

    1994-01-01

    In many domains, scheduling a sequence of jobs is an important function contributing to the overall efficiency of the operation. At Boeing, we develop schedules for many different domains, including assembly of military and commercial aircraft, weapons systems, and space vehicles. Boeing is under contract to develop scheduling systems for the Space Station Payload Planning System (PPS) and Payload Operations and Integration Center (POIC). These applications require that we respect certain sequencing restrictions among the jobs to be scheduled while at the same time assigning resources to the jobs. We call this general problem scheduling and resource allocation. Genetic algorithms (GA's) offer a search method that uses a population of solutions and benefits from intrinsic parallelism to search the problem space rapidly, producing near-optimal solutions. Good intermediate solutions are probabilistically recombined to produce better offspring (based upon some application-specific measure of solution fitness, e.g., minimum flowtime, or schedule completeness). Also, at any point in the search, any intermediate solution can be accepted as a final solution; allowing the search to proceed longer usually produces a better solution while terminating the search at virtually any time may yield an acceptable solution. Many processes are constrained by restrictions of sequence among the individual jobs. For a specific job, other jobs must be completed beforehand. While there are obviously many other constraints on processes, it is these on which we focused for this research: how to allocate crews to jobs while satisfying job precedence requirements and personnel, tooling, and fixture (or, more generally, resource) requirements.

  8. MO-PIS-Exhibit Hall-01: Imaging: CT Dose Optimization Technologies I

    SciTech Connect

    Denison, K; Smith, S

    2014-06-15

    Partners in Solutions is an exciting new program in which AAPM partners with our vendors to present practical “hands-on” information about the equipment and software systems that we use in our clinics. The imaging topic this year is CT scanner dose optimization capabilities. Note that the sessions are being held in a special purpose room built on the Exhibit Hall Floor, to encourage further interaction with the vendors. Dose Optimization Capabilities of GE Computed Tomography Scanners Presentation Time: 11:15 – 11:45 AM GE Healthcare is dedicated to the delivery of high quality clinical images through the development of technologies, which optimize the application of ionizing radiation. In computed tomography, dose management solutions fall into four categories: employs projection data and statistical modeling to decrease noise in the reconstructed image - creating an opportunity for mA reduction in the acquisition of diagnostic images. Veo represents true Model Based Iterative Reconstruction (MBiR). Using high-level algorithms in tandem with advanced computing power, Veo enables lower pixel noise standard deviation and improved spatial resolution within a single image. Advanced Adaptive Image Filters allow for maintenance of spatial resolution while reducing image noise. Examples of adaptive image space filters include Neuro 3-D filters and Cardiac Noise Reduction Filters. AutomA adjusts mA along the z-axis and is the CT equivalent of auto exposure control in conventional x-ray systems. Dynamic Z-axis Tracking offers an additional opportunity for dose reduction in helical acquisitions while SmartTrack Z-axis Tracking serves to ensure beam, collimator and detector alignment during tube rotation. SmartmA provides angular mA modulation. ECG Helical Modulation reduces mA during the systolic phase of the heart cycle. SmartBeam optimization uses bowtie beam-shaping hardware and software to filter off-axis x-rays - minimizing dose and reducing x-ray scatter. The

  9. Optimal radiotherapy dose schedules under parametric uncertainty

    NASA Astrophysics Data System (ADS)

    Badri, Hamidreza; Watanabe, Yoichi; Leder, Kevin

    2016-01-01

    We consider the effects of parameter uncertainty on the optimal radiation schedule in the context of the linear-quadratic model. Our interest arises from the observation that if inter-patient variability in normal and tumor tissue radiosensitivity or in the sparing factor of the organs-at-risk (OAR) is not accounted for during radiation scheduling, the performance of the therapy may be strongly degraded or the OAR may receive a substantially larger dose than the allowable threshold. This paper proposes a stochastic radiation scheduling concept to incorporate inter-patient variability into the scheduling optimization problem. Our method is based on a probabilistic approach, where the model parameters are given by a set of random variables. Our probabilistic formulation ensures that our constraints are satisfied with a given probability, and that our objective function achieves a desired level with a stated probability. We used a variable transformation to reduce the resulting optimization problem to two dimensions. We showed that the optimal solution lies on the boundary of the feasible region and we implemented a branch and bound algorithm to find the global optimal solution. We demonstrated how the configuration of optimal schedules in the presence of uncertainty compares to optimal schedules in the absence of uncertainty (conventional schedule). We observed that in order to protect against the possibility of the model parameters falling into a region where the conventional schedule is no longer feasible, it is required to avoid extremal solutions, i.e. a single large dose or very large total dose delivered over a long period. Finally, we performed numerical experiments in the setting of head and neck tumors including several normal tissues to reveal the effect of parameter uncertainty on optimal schedules and to evaluate the sensitivity of the solutions to the choice of key model parameters.
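
    A toy sketch of the probabilistic-constraint idea using the standard linear-quadratic biologically effective dose, BED = n·d·(1 + d/(α/β)): for a candidate schedule, the probability that the OAR exceeds a BED limit is estimated by Monte Carlo over an uncertain sparing factor and α/β. All distributions, the BED limit and the schedules are illustrative assumptions, not the paper's model or data.

    import numpy as np

    rng = np.random.default_rng(0)

    def bed(n, d, alpha_beta):
        return n * d * (1.0 + d / alpha_beta)

    def oar_violation_probability(n, d, bed_limit=100.0, n_samples=100_000):
        # Uncertain per-patient parameters (assumed distributions)
        sparing = rng.normal(0.7, 0.1, n_samples).clip(0.3, 1.0)  # OAR dose / tumor dose
        ab_oar = rng.normal(3.0, 0.5, n_samples).clip(1.0, None)  # Gy
        return float(np.mean(bed(n, d * sparing, ab_oar) > bed_limit))

    # Compare a conventional schedule with a hypofractionated one
    for n, d in [(30, 2.0), (5, 8.0)]:
        tumor_bed = bed(n, d, alpha_beta=10.0)
        print(f"{n} x {d} Gy: tumor BED = {tumor_bed:.0f} Gy, "
              f"P(OAR BED > limit) = {oar_violation_probability(n, d):.3f}")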

  10. Monte Carlo- versus pencil-beam-/collapsed-cone-dose calculation in a heterogeneous multi-layer phantom

    NASA Astrophysics Data System (ADS)

    Krieger, Thomas; Sauer, Otto A.

    2005-03-01

    The aim of this work was to evaluate the accuracy of dose predicted in heterogeneous media by a pencil beam (PB), a collapsed cone (CC) and a Monte Carlo (MC) algorithm. For this purpose, a simple multi-layer phantom composed of Styrofoam and white polystyrene was irradiated with 10 × 10 cm² as well as 20 × 20 cm² open 6 MV photon fields. The beam axis was aligned parallel to the layers and various field offsets were applied. Thereby, the amount of lateral scatter was controlled. Dose measurements were performed with an ionization chamber positioned both in the central layer of white polystyrene and the adjacent layers of Styrofoam. It was found that, in white polystyrene, both MC and CC calculations agreed satisfactorily with the measurements whereas the PB algorithm calculated 12% higher doses on average. When off-axis dose profiles were studied, the observed differences between the three algorithms' results increased dramatically. In the regions of low density, CC calculated 10% (8%) lower doses for the 10 × 10 cm² (20 × 20 cm²) fields than MC. The MC data, on the other hand, agreed well with the measurements, provided that proper replacement correction for the ionization chamber embedded in Styrofoam was performed. PB results evidently did not account for the scattering geometry and were therefore not really comparable. Our investigations showed that the PB algorithm generates very large errors for the dose in the vicinity of interfaces and within low-density regions. We also found that for the CC algorithm used, large deviations for the absolute dose (dose/monitor unit) occur in regions of electronic disequilibrium. The performance might be improved by better adapted parameters. Therefore, we recommend a careful investigation of the accuracy for dose calculations in heterogeneous media for each beam data set and algorithm.

  11. SU-E-J-89: Motion Effects On Organ Dose in Respiratory Gated Stereotactic Body Radiation Therapy

    SciTech Connect

    Wang, T; Zhu, L; Khan, M; Landry, J; Rajpara, R; Hawk, N

    2014-06-01

    Purpose: Existing reports on gated radiation therapy focus mainly on optimizing dose delivery to the target structure. This work investigates the motion effects on radiation dose delivered to organs at risk (OAR) in respiratory gated stereotactic body radiation therapy (SBRT). A new algorithmic dose-analysis tool is developed to evaluate the optimality of the gating phase for dose sparing on OARs while ensuring adequate target coverage. Methods: Eight patients with pancreatic cancer were treated on a phase I prospective study employing 4DCT-based SBRT. For each patient, 4DCT scans are acquired and sorted into 10 respiratory phases (inhale-exhale-inhale). Treatment planning is performed on the average CT image. The average CT is spatially registered to other phases. The resultant displacement field is then applied on the plan dose map to estimate the actual dose map for each phase. Dose values of each voxel are fitted to a sinusoidal function. Fitting parameters of dose variation, mean delivered dose and optimal gating phase for each voxel over the respiration cycle are mapped on the dose volume. Results: The sinusoidal function accurately models the dose change during respiratory motion (mean fitting error 4.6%). In the eight patients, mean dose variation is 3.3 Gy on OARs with a maximum of 13.7 Gy. Two patients have volumes of about 100 cm³ covered by more than 5 Gy deviation. The mean delivered dose maps are similar to plan dose with slight deformation. The optimal gating phase varies widely across the patient, with phase 5 or 6 on about 60% of the volume, and phase 0 on most of the rest. Conclusion: A new algorithmic tool is developed to conveniently quantify dose deviation on OARs from plan dose during the respiratory cycle. The proposed software facilitates the treatment planning process by providing the optimal respiratory gating phase for dose sparing on each OAR.
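
    A minimal sketch of the per-voxel sinusoidal fit over the 10 respiratory phases, solved by linear least squares on a sine/cosine basis. The dose values are synthetic and the choice of the minimum-dose phase as the "optimal" gating phase for an OAR voxel is a simplifying assumption for illustration.

    import numpy as np

    phases = np.arange(10)                           # phase indices 0..9
    theta = 2 * np.pi * phases / 10                  # one breathing cycle, assumed uniform

    rng = np.random.default_rng(3)
    voxel_dose = 20.0 + 3.3 * np.sin(theta + 1.2) \
                 + 0.3 * rng.standard_normal(10)     # Gy, synthetic per-phase doses

    # Model: dose(theta) = m + a*sin(theta) + b*cos(theta)
    A = np.column_stack([np.ones_like(theta), np.sin(theta), np.cos(theta)])
    m, a, b = np.linalg.lstsq(A, voxel_dose, rcond=None)[0]

    fitted = A @ np.array([m, a, b])
    amplitude = np.hypot(a, b)                       # dose variation over the cycle
    best_phase = phases[np.argmin(fitted)]           # lowest fitted dose for this voxel
    print(f"mean dose {m:.1f} Gy, variation {amplitude:.1f} Gy, "
          f"lowest-dose phase {best_phase}, "
          f"fit RMS error {(voxel_dose - fitted).std():.2f} Gy")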

  12. Portable Health Algorithms Test System

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin J.; Wong, Edmond; Fulton, Christopher E.; Sowers, Thomas S.; Maul, William A.

    2010-01-01

    A document discusses the Portable Health Algorithms Test (PHALT) System, which has been designed as a means for evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT system allows systems health management algorithms to be developed in a graphical programming environment, to be tested and refined using system simulation or test data playback, and to be evaluated in a real-time hardware-in-the-loop mode with a live test article. The integrated hardware and software development environment provides a seamless transition from algorithm development to real-time implementation. The portability of the hardware makes it quick and easy to transport between test facilities. This hardware/software architecture is flexible enough to support a variety of diagnostic applications and test hardware, and the GUI-based rapid prototyping capability is sufficient to support development, execution, and testing of custom diagnostic algorithms. The PHALT operating system supports execution of diagnostic algorithms under real-time constraints. PHALT can perform real-time capture and playback of test rig data with the ability to augment/modify the data stream (e.g. inject simulated faults). It performs algorithm testing using a variety of data input sources, including real-time data acquisition, test data playback, and system simulations, and also provides system feedback to evaluate closed-loop diagnostic response and mitigation control.

  13. Helical tomotherapy superficial dose measurements

    SciTech Connect

    Ramsey, Chester R.; Seibert, Rebecca M.; Robison, Benjamin; Mitchell, Martha

    2007-08-15

    Helical tomotherapy is a treatment technique that is delivered from a 6 MV fan beam that traces a helical path while the couch moves linearly into the bore. In order to increase the treatment delivery dose rate, helical tomotherapy systems do not have a flattening filter. As such, the dose distributions near the surface of the patient may be considerably different from other forms of intensity-modulated delivery. The purpose of this study was to measure the dose distributions near the surface for helical tomotherapy plans with a varying separation between the target volume and the surface of an anthropomorphic phantom. A hypothetical planning target volume (PTV) was defined on an anthropomorphic head phantom to simulate a 2.0 Gy per fraction IMRT parotid-sparing head and neck treatment of the upper neck nodes. A total of six target volumes were created with 0, 1, 2, 3, 4, and 5 mm of separation between the surface of the phantom and the outer edge of the PTV. Superficial doses were measured for each of the treatment deliveries using film placed in the head phantom and thermoluminescent dosimeters (TLDs) placed on the phantom's surface underneath an immobilization mask. In the 0 mm test case where the PTV extends to the phantom surface, the mean TLD dose was 1.73 ± 0.10 Gy (or 86.6 ± 5.1% of the prescribed dose). The measured superficial dose decreases to 1.23 ± 0.10 Gy (61.5 ± 5.1% of the prescribed dose) for a PTV-surface separation of 5 mm. The doses measured by the TLDs indicated that the tomotherapy treatment planning system overestimates superficial doses by 8.9 ± 3.2%. The radiographic film dose for the 0 mm test case was 1.73 ± 0.07 Gy, as compared to the calculated dose of 1.78 ± 0.05 Gy. Given the results of the TLD and film measurements, the superficial calculated doses are overestimated between 3% and 13%. Without the use of bolus, tumor volumes that extend to the surface may be underdosed. As such, it is recommended that bolus be added for these

  14. A {gamma} dose distribution evaluation technique using the k-d tree for nearest neighbor searching

    SciTech Connect

    Yuan Jiankui; Chen Weimin

    2010-09-15

    Purpose: The authors propose an algorithm based on the k-d tree for nearest neighbor searching to improve the γ calculation time for 2D and 3D dose distributions. Methods: The γ calculation method has been widely used for comparisons of dose distributions in clinical treatment plans and quality assurance. By specifying the acceptable dose and distance-to-agreement criteria, the method provides quantitative measurement of the agreement between the reference and evaluation dose distributions. The γ value indicates the acceptability. In regions where γ ≤ 1, the predefined criterion is satisfied and thus the agreement is acceptable; otherwise, the agreement fails. Although the concept of the method is not complicated and a quick naive implementation is straightforward, an efficient and robust implementation is not trivial. Recent algorithms based on exhaustive searching within a maximum radius, the geometric Euclidean distance, and the table lookup method have been proposed to improve the computational time for multidimensional dose distributions. Motivated by the fact that the least searching time for finding a nearest neighbor can be an O(log N) operation with a k-d tree, where N is the total number of the dose points, the authors propose an algorithm based on the k-d tree for the γ evaluation in this work. Results: In the experiment, the authors found that the average k-d tree construction time per reference point is O(log N), while the nearest neighbor searching time per evaluation point is proportional to O(N^(1/k)), where k is between 2 and 3 for two-dimensional and three-dimensional dose distributions, respectively. Conclusions: Comparing with other algorithms such as exhaustive search and sorted list O(N), the k-d tree algorithm for γ evaluation is much more efficient.
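
    A sketch of a γ evaluation that uses a k-d tree for the nearest-neighbour search: positions are scaled by the distance-to-agreement criterion and doses by the dose criterion, so that the γ value at each reference point is simply the nearest-neighbour distance in the scaled space. This 1D example with scipy's cKDTree omits interpolation of the evaluation distribution, so it illustrates the data structure rather than reproducing the authors' implementation.

    import numpy as np
    from scipy.spatial import cKDTree

    dta_mm, dose_crit = 2.0, 0.02             # 2 mm / 2% (of normalized max dose) criteria

    # Synthetic reference and evaluation dose profiles (normalized dose)
    x = np.linspace(0, 100, 1001)             # mm, 0.1 mm grid
    ref_dose = np.exp(-((x - 50) / 15) ** 2)
    eval_dose = np.exp(-((x - 50.8) / 15) ** 2) * 1.01   # shifted and rescaled

    # Build the tree over evaluation points in the scaled (position, dose) space
    eval_pts = np.column_stack([x / dta_mm, eval_dose / dose_crit])
    tree = cKDTree(eval_pts)

    ref_pts = np.column_stack([x / dta_mm, ref_dose / dose_crit])
    gamma, _ = tree.query(ref_pts, k=1)       # nearest-neighbour distance = gamma

    print(f"gamma passing rate (2%/2 mm): {np.mean(gamma <= 1.0):.1%}, "
          f"max gamma {gamma.max():.2f}")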

  15. Objective assessment of image quality and dose reduction in CT iterative reconstruction

    SciTech Connect

    Vaishnav, J. Y. Jung, W. C.; Popescu, L. M.; Zeng, R.; Myers, K. J.

    2014-07-15

    Purpose: Iterative reconstruction (IR) algorithms have the potential to reduce radiation dose in CT diagnostic imaging. As these algorithms become available on the market, a standardizable method of quantifying the dose reduction that a particular IR method can achieve would be valuable. Such a method would assist manufacturers in making promotional claims about dose reduction, buyers in comparing different devices, physicists in independently validating the claims, and the United States Food and Drug Administration in regulating the labeling of CT devices. However, the nonlinear nature of commercially available IR algorithms poses challenges to objectively assessing image quality, a necessary step in establishing the amount of dose reduction that a given IR algorithm can achieve without compromising that image quality. This review paper seeks to consolidate information relevant to objectively assessing the quality of CT IR images, and thereby measuring the level of dose reduction that a given IR algorithm can achieve. Methods: The authors discuss task-based methods for assessing the quality of CT IR images and evaluating dose reduction. Results: The authors explain and review recent literature on signal detection and localization tasks in CT IR image quality assessment, the design of an appropriate phantom for these tasks, possible choices of observers (including human and model observers), and methods of evaluating observer performance. Conclusions: Standardizing the measurement of dose reduction is a problem of broad interest to the CT community and to public health. A necessary step in the process is the objective assessment of CT image quality, for which various task-based methods may be suitable. This paper attempts to consolidate recent literature that is relevant to the development and implementation of task-based methods for the assessment of CT IR image quality.

  16. Comparison of optimization algorithms in intensity-modulated radiation therapy planning

    NASA Astrophysics Data System (ADS)

    Kendrick, Rachel

    Intensity-modulated radiation therapy is used to better conform the radiation dose to the target, which includes avoiding healthy tissue. Planning programs employ optimization methods to search for the best fluence of each photon beam, and therefore to create the best treatment plan. The Computational Environment for Radiotherapy Research (CERR), a program written in MATLAB, was used to examine some commonly-used algorithms for one 5-beam plan. Algorithms include the genetic algorithm, quadratic programming, pattern search, constrained nonlinear optimization, simulated annealing, the optimization method used in Varian Eclipse™, and some hybrids of these. Quadratic programming, simulated annealing, and a quadratic/simulated annealing hybrid were also separately compared using different prescription doses. The results of each dose-volume histogram as well as the visual dose color wash were used to compare the plans. CERR's built-in quadratic programming provided the best overall plan, but avoidance of the organ-at-risk was rivaled by other programs. Hybrids of quadratic programming with some of these algorithms seem to suggest the possibility of better planning programs, as shown by the improved quadratic/simulated annealing plan when compared to the simulated annealing algorithm alone. Further experimentation will be done to improve cost functions and computational time.
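
    A toy sketch of quadratic-programming-style fluence optimization: non-negative beamlet weights are chosen to minimize the squared deviation of voxel doses from the prescription. The dose-influence matrix here is random noise standing in for a real beam model, and no organ-at-risk terms or clinical constraints are included.

    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(7)
    n_voxels, n_beamlets = 400, 60

    D = rng.random((n_voxels, n_beamlets)) * 0.05    # dose per unit beamlet weight (Gy)
    prescription = np.full(n_voxels, 2.0)            # 2 Gy to every target voxel

    # Solve  min ||D w - p||^2  subject to  w >= 0
    weights, residual = nnls(D, prescription)

    dose = D @ weights
    print(f"active beamlets: {(weights > 1e-9).sum()} / {n_beamlets}")
    print(f"target dose: mean {dose.mean():.2f} Gy, "
          f"min {dose.min():.2f} Gy, max {dose.max():.2f} Gy")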

  17. 2D/3D registration algorithm for lung brachytherapy

    SciTech Connect

    Zvonarev, P. S.; Farrell, T. J.; Hunter, R.; Wierzbicki, M.; Hayward, J. E.; Sur, R. K.

    2013-02-15

    Purpose: A 2D/3D registration algorithm is proposed for registering orthogonal x-ray images with a diagnostic CT volume for high dose rate (HDR) lung brachytherapy. Methods: The algorithm utilizes a rigid registration model based on a pixel/voxel intensity matching approach. To achieve accurate registration, a robust similarity measure combining normalized mutual information, image gradient, and intensity difference was developed. The algorithm was validated using a simple body phantom and an anthropomorphic phantom. Transfer catheters were placed inside the phantoms to simulate the unique image features observed during treatment. The algorithm sensitivity to various degrees of initial misregistration and to the presence of foreign objects, such as ECG leads, was evaluated. Results: The mean registration error was 2.2 and 1.9 mm for the simple body and anthropomorphic phantoms, respectively. The error was comparable to the interoperator catheter digitization error of 1.6 mm. Preliminary analysis of data acquired from four patients indicated a mean registration error of 4.2 mm. Conclusions: Results obtained using the proposed algorithm are clinically acceptable, especially considering the complications normally encountered when imaging during lung HDR brachytherapy.
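
    A sketch of the normalized mutual information component of the similarity measure, computed from a joint intensity histogram. The image pair below is synthetic, the bin count is an assumption, and the gradient and intensity-difference terms of the combined measure are not shown.

    import numpy as np

    def normalized_mutual_information(img_a, img_b, bins=32):
        joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
        pxy = joint / joint.sum()
        px, py = pxy.sum(axis=1), pxy.sum(axis=0)

        def entropy(p):
            p = p[p > 0]
            return -np.sum(p * np.log(p))

        # NMI = (H(A) + H(B)) / H(A, B): 2 for identical images, ~1 if independent
        return (entropy(px) + entropy(py)) / entropy(pxy.ravel())

    rng = np.random.default_rng(0)
    fixed = rng.random((128, 128))
    shifted = np.roll(fixed, 3, axis=1) + 0.05 * rng.random((128, 128))

    print("NMI(fixed, fixed)   =", round(normalized_mutual_information(fixed, fixed), 3))
    print("NMI(fixed, shifted) =", round(normalized_mutual_information(fixed, shifted), 3))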

  18. Minimizing dose during fluoroscopic tracking through geometric performance feedback

    PubMed Central

    Siddique, S.; Fiume, E.; Jaffray, D. A.

    2011-01-01

    Purpose: There is a growing concern regarding the dose delivered during x-ray fluoroscopy guided procedures, particularly in interventional cardiology and neuroradiology, and in real-time tumor tracking radiotherapy and radiosurgery. Many of these procedures involve long treatment times, and as such, there is cause for concern regarding the dose delivered and the associated radiation related risks. An insufficient dose, however, may convey less geometric information, which may lead to inaccuracy and imprecision in intervention placement. The purpose of this study is to investigate a method for achieving the required tracking uncertainty for a given interventional procedure using minimal dose. Methods: A simple model is used to demonstrate that a relationship exists between imaging dose and tracking uncertainty. A feedback framework is introduced that exploits this relationship to modulate the tube current (and hence the dose) in order to maintain the required uncertainty for a given interventional procedure. This framework is evaluated in the context of a fiducial tracking problem associated with image-guided radiotherapy in the lung. A particle filter algorithm is used to robustly track the fiducial as it traverses through regions of high and low quantum noise. Published motion models are incorporated in a tracking test suite to evaluate the dose-localization performance trade-offs. Results: It is shown that using this framework, the entrance surface exposure can be reduced by up to 28.6% when feedback is employed to operate at a geometric tracking uncertainty of 0.3 mm. Conclusions: The analysis reveals a potentially powerful technique for dynamic optimization of fluoroscopic imaging parameters to control the applied dose by exploiting the trade-off between tracking uncertainty and x-ray exposure per frame. PMID:21776784
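
    The record couples a particle filter with tube-current feedback. The sketch below is a schematic, one-dimensional illustration of that idea (a bootstrap filter plus a proportional current adjustment); the motion model, noise levels, and feedback rule are assumptions, not the authors' implementation.

        import numpy as np

        rng = np.random.default_rng(0)

        def particle_filter_step(particles, weights, measurement, motion_std, meas_std):
            """One bootstrap particle-filter update for a 1D fiducial position (illustrative)."""
            # Predict: propagate particles through a simple random-walk motion model (assumption)
            particles = particles + rng.normal(0.0, motion_std, size=particles.shape)
            # Update: reweight by a Gaussian measurement likelihood whose width reflects image noise
            weights = weights * np.exp(-0.5 * ((measurement - particles) / meas_std) ** 2)
            weights = weights / weights.sum()
            # Resample when the effective sample size collapses
            if 1.0 / np.sum(weights ** 2) < 0.5 * len(particles):
                idx = rng.choice(len(particles), size=len(particles), p=weights)
                particles = particles[idx]
                weights = np.full(len(particles), 1.0 / len(particles))
            return particles, weights

        def adjust_tube_current(current_ma, tracking_std, target_std=0.3):
            """Crude proportional feedback (assumption): raise current when uncertainty exceeds the target."""
            return current_ma * (tracking_std / target_std)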

  19. Multi dose computed tomography image fusion based on hybrid sparse methodology.

    PubMed

    Venkataraman, Anuyogam; Alirezaie, Javad; Babyn, Paul; Ahmadian, Alireza

    2014-01-01

    With the increasing utilization of X-ray Computed Tomography (CT) in medical diagnosis, obtaining higher quality images with lower exposure to radiation has become a highly challenging task in image processing. In this paper, a novel sparse fusion algorithm is proposed to address the problem of low Signal to Noise Ratio (SNR) in low dose CT images. An initial fused image is obtained by combining low dose and medium dose images in the sparse domain, utilizing a Dual Tree Complex Wavelet Transform (DTCWT) dictionary trained on a high dose image. The strongly focused image is then obtained by determining the pixels of the source images that have high similarity with the pixels of the initial fused image. The final denoised image is obtained by fusing the strongly focused image and the decomposed sparse vectors of the source images, thereby preserving the edges and other critical information needed for diagnosis. This paper demonstrates the effectiveness of the proposed algorithm both quantitatively and qualitatively. PMID:25570844
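
    As a generic illustration of fusing two dose levels in a transform or sparse domain (not the paper's specific DTCWT pipeline), a common coefficient-wise max-absolute fusion rule looks like this:

        import numpy as np

        def fuse_coefficients(coeffs_low_dose, coeffs_medium_dose):
            """Keep, at each position, the transform coefficient with the larger magnitude."""
            take_low = np.abs(coeffs_low_dose) >= np.abs(coeffs_medium_dose)
            return np.where(take_low, coeffs_low_dose, coeffs_medium_dose)

    The fused coefficients would then be passed through the inverse transform to form the fused image.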

  20. Cluster algorithms and computational complexity

    NASA Astrophysics Data System (ADS)

    Li, Xuenan

    Cluster algorithms for the 2D Ising model with a staggered field have been studied and a new cluster algorithm for path sampling has been worked out. The complexity properties of the Bak-Sneppen model and the growing network model have been studied using computational complexity theory. The dynamic critical behavior of the two-replica cluster algorithm is studied. Several versions of the algorithm are applied to the two-dimensional, square lattice Ising model with a staggered field. The dynamic exponent for the full algorithm is found to be less than 0.5. It is found that odd translations of one replica with respect to the other together with global flips are essential for obtaining a small value of the dynamic exponent. The path sampling problem for the 1D Ising model is studied using both a local algorithm and a novel cluster algorithm. The local algorithm is extremely inefficient at low temperature, where the integrated autocorrelation time is found to be proportional to the fourth power of the correlation length. The dynamic exponent of the cluster algorithm is found to be zero, so the cluster algorithm is much more efficient than the local algorithm. The parallel computational complexity of the Bak-Sneppen evolution model is studied. It is shown that Bak-Sneppen histories can be generated by a massively parallel computer in a time that is polylog in the length of the history, which means that the logical depth of producing a Bak-Sneppen history is exponentially less than the length of the history. The parallel dynamics for generating Bak-Sneppen histories is contrasted to standard Bak-Sneppen dynamics. The parallel computational complexity of the growing network model is studied. The growth of the network with linear kernels is shown to be not complex and an algorithm with polylog parallel running time is found. The growth of the network with gamma ≥ 2 super-linear kernels can be realized by a randomized parallel algorithm with polylog expected running time.
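
    The record studies a two-replica cluster algorithm for the Ising model with a staggered field; as a simpler illustration of the cluster idea, here is a minimal single-cluster (Wolff) update for the zero-field 2D Ising model. The lattice handling and temperature convention are assumptions for illustration only.

        import numpy as np

        rng = np.random.default_rng(1)

        def wolff_update(spins, beta):
            """One Wolff single-cluster update for the zero-field 2D Ising model (periodic lattice)."""
            L = spins.shape[0]
            p_add = 1.0 - np.exp(-2.0 * beta)               # bond-activation probability
            seed = (rng.integers(L), rng.integers(L))
            cluster_spin = spins[seed]
            stack, in_cluster = [seed], {seed}
            while stack:
                i, j = stack.pop()
                for ni, nj in ((i + 1) % L, j), ((i - 1) % L, j), (i, (j + 1) % L), (i, (j - 1) % L):
                    if (ni, nj) not in in_cluster and spins[ni, nj] == cluster_spin \
                            and rng.random() < p_add:
                        in_cluster.add((ni, nj))
                        stack.append((ni, nj))
            for i, j in in_cluster:                          # flip the whole cluster at once
                spins[i, j] *= -1
            return spins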

  1. Routing Algorithm Exploits Spatial Relations

    NASA Technical Reports Server (NTRS)

    Okino, Clayton; Jennings, Esther

    2004-01-01

    A recently developed routing algorithm for broadcasting in an ad hoc wireless communication network takes account of, and exploits, the spatial relationships among the locations of nodes, in addition to transmission power levels and distances between the nodes. In contrast, most prior algorithms for discovering routes through ad hoc networks rely heavily on transmission power levels and utilize limited graph-topology techniques that do not involve consideration of the aforesaid spatial relationships. The present algorithm extracts the relevant spatial-relationship information by use of a construct denoted the relative-neighborhood graph (RNG).
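
    The relative-neighborhood graph itself has a simple definition: an edge (u, v) is kept unless some third node w is closer to both u and v than they are to each other. A brute-force construction sketch is shown below (node coordinates are assumed to be given as an array; this is illustrative, not the NASA algorithm itself).

        import numpy as np

        def relative_neighborhood_graph(points):
            """Build the RNG: keep edge (u, v) unless some w satisfies max(d(u,w), d(v,w)) < d(u,v)."""
            n = len(points)
            d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
            edges = []
            for u in range(n):
                for v in range(u + 1, n):
                    blocked = any(max(d[u, w], d[v, w]) < d[u, v]
                                  for w in range(n) if w not in (u, v))
                    if not blocked:
                        edges.append((u, v))
            return edges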

  2. Linearization algorithms for line transfer

    SciTech Connect

    Scott, H.A.

    1990-11-06

    Complete linearization is a very powerful technique for solving multi-line transfer problems that can be used efficiently with a variety of transfer formalisms. The linearization algorithm we describe is computationally very similar to ETLA, but allows an effective treatment of strongly-interacting lines. This algorithm has been implemented (in several codes) with two different transfer formalisms in all three one-dimensional geometries. We also describe a variation of the algorithm that handles saturable laser transport. Finally, we present a combination of linearization with a local approximate operator formalism, which has been implemented in two dimensions and is being developed in three dimensions. 11 refs.

  3. Bayesian estimation of dose thresholds

    NASA Technical Reports Server (NTRS)

    Groer, P. G.; Carnes, B. A.

    2003-01-01

    An example is described of Bayesian estimation of radiation absorbed dose thresholds (subsequently simply referred to as dose thresholds) using a specific parametric model applied to a data set on mice exposed to 60Co gamma rays and fission neutrons. A Weibull based relative risk model with a dose threshold parameter was used to analyse, as an example, lung cancer mortality and determine the posterior density for the threshold dose after single exposures to 60Co gamma rays or fission neutrons from the JANUS reactor at Argonne National Laboratory. The data consisted of survival, censoring times and cause of death information for male B6CF1 unexposed and exposed mice. The 60Co gamma whole-body doses for the two exposed groups were 0.86 and 1.37 Gy. The neutron whole-body doses were 0.19 and 0.38 Gy. Marginal posterior densities for the dose thresholds for neutron and gamma radiation were calculated with numerical integration and found to have quite different shapes. The density of the threshold for 60Co is unimodal with a mode at about 0.50 Gy. The threshold density for fission neutrons declines monotonically from a maximum value at zero with increasing doses. The posterior densities for all other parameters were similar for the two radiation types.
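
    The posterior densities in the record were obtained by numerical integration. A minimal, generic sketch of evaluating and normalizing a posterior for a threshold parameter on a uniform grid is shown below; the log-likelihood (here the Weibull relative-risk survival model) and the prior must be supplied by the user and are not reproduced from the paper.

        import numpy as np

        def grid_posterior(theta_grid, log_likelihood, log_prior):
            """Normalized posterior density on a uniform grid of threshold values.

            log_likelihood, log_prior : callables of a single threshold value;
            the survival-model likelihood itself is assumed to be supplied by the user.
            """
            log_post = np.array([log_likelihood(t) + log_prior(t) for t in theta_grid])
            log_post -= log_post.max()                               # guard against underflow
            post = np.exp(log_post)
            post /= post.sum() * (theta_grid[1] - theta_grid[0])     # Riemann-sum normalization
            return post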

  4. The Assessment of Effective Dose Equivalent Using Personnel Dosimeters

    NASA Astrophysics Data System (ADS)

    Xu, Xie

    From January 1994, U.S. nuclear plants must develop a technically rigorous approach for determining the effective dose equivalent for their work forces. This dissertation explains concepts associated with effective dose equivalent and describes how to assess effective dose equivalent by using conventional personnel dosimetry measurements. A Monte Carlo computer code, MCNP, was used to calculate photon transport through a model of the human body. Published mathematical phantoms of the human adult male and female were used to simulate irradiation from a variety of external radiation sources in order to calculate organ and tissue doses, as well as effective dose equivalent using weighting factors from ICRP Publication 26. The radiation sources considered were broad parallel photon beams incident on the body from 91 different angles and isotropic point sources located at 234 different locations in contact with or near the body. Monoenergetic photons of 0.08, 0.3, and 1.0 MeV were considered for both sources. Personnel dosimeters were simulated on the surface of the body and exposed to the same sources. From these data, the influence of dosimeter position on dosimeter response was investigated. Different algorithms for assessing effective dose equivalent from personnel dosimeter responses were proposed and evaluated. The results indicate that the current single-badge approach is satisfactory for most common exposure situations encountered in nuclear plants, but additional conversion factors may be used when more accurate results become desirable. For uncommon exposures involving a source situated at the back of the body or located overhead, the current approach of using multi-badges and assigning the highest dose is overly conservative and unnecessarily expensive. For these uncommon exposures, a new algorithm, based on two dosimeters, one on the front of the body and another one on the back of the body, has been shown to yield conservative assessment of

  5. Inverse modeling of FIB milling by dose profile optimization

    NASA Astrophysics Data System (ADS)

    Lindsey, S.; Waid, S.; Hobler, G.; Wanzenböck, H. D.; Bertagnolli, E.

    2014-12-01

    FIB technologies possess a unique ability to form topographies that are difficult or impossible to generate with binary etching through typical photo-lithography. The ability to arbitrarily vary the spatial dose distribution, and therefore the amount of milling, opens possibilities for the production of a wide range of functional structures with applications in biology, chemistry, and optics. However, in practice the realization of these goals is made difficult by the angular dependence of the sputtering yield and by redeposition effects that vary as the topography evolves. An inverse modeling algorithm that optimizes dose profiles, defined as the superposition of time-invariant pixel dose profiles (determined from the beam parameters and pixel dwell times), is presented. The response of the target to a set of pixel dwell times is modeled by numerical continuum simulations utilizing 1st and 2nd order sputtering and redeposition; the resulting surfaces are evaluated with respect to a target topography in an error minimization routine. Two algorithms for the parameterization of pixel dwell times are presented: a direct pixel dwell time method, and an abstracted method that uses a refineable piecewise linear cage function to generate pixel dwell times from a minimal number of parameters. The cage function method demonstrates great flexibility and efficiency, with performance enhancements exceeding ∼10× compared to direct fitting for medium to large simulation sets. Furthermore, the refineable nature of the cage function enables solutions to adapt to the desired target function. The optimization algorithm, although working with stationary dose profiles, is demonstrated to be applicable outside the quasi-static approximation as well. Experimental data confirm the viability of the solutions for 5 × 7 μm deep lens-like structures defined by 90 pixel dwell times.
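
    The cage-function idea, generating many pixel dwell times from a few refineable nodes of a piecewise-linear function, can be sketched as follows. The forward milling simulation is assumed to be supplied by the user, and the names are illustrative rather than the authors' code.

        import numpy as np
        from scipy.optimize import least_squares

        def dwell_times_from_cage(cage_values, cage_positions, pixel_positions):
            """Piecewise-linear 'cage' parameterization: interpolate a few node values onto all pixels."""
            return np.interp(pixel_positions, cage_positions, cage_values)

        def fit_cage(target_depth, simulate_milling, cage_positions, pixel_positions, x0):
            """Least-squares fit of cage node values so the simulated topography matches the target.

            simulate_milling(dwell_times) -> simulated depth profile (user-supplied forward model).
            """
            def residual(cage_values):
                dwell = dwell_times_from_cage(cage_values, cage_positions, pixel_positions)
                return simulate_milling(dwell) - target_depth

            result = least_squares(residual, x0, bounds=(0.0, np.inf))   # dwell times must stay non-negative
            return result.x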

  6. Exercise Dose in Clinical Practice.

    PubMed

    Wasfy, Meagan M; Baggish, Aaron L

    2016-06-01

    There is wide variability in the physical activity patterns of the patients in contemporary clinical cardiovascular practice. This review is designed to address the impact of exercise dose on key cardiovascular risk factors and on mortality. We begin by examining the body of literature that supports a dose-response relationship between exercise and cardiovascular disease risk factors, including plasma lipids, hypertension, diabetes mellitus, and obesity. We next explore the relationship between exercise dose and mortality by reviewing the relevant epidemiological literature underlying current physical activity guideline recommendations. We then expand this discussion to critically examine recent data pertaining to the impact of exercise dose at the lowest and highest ends of the spectrum. Finally, we provide a framework for how the key concepts of exercise dose can be integrated into clinical practice. PMID:27267537

  7. Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    Finch, S.M.

    1990-09-01

    This monthly report summarizes the technical progress and project status for the Hanford Environmental Dose Reconstruction (HEDR) Project being conducted at the Pacific Northwest Laboratory (PNL) under the direction of a Technical Steering Panel (TSP). The TSP is composed of experts in numerous technical fields related to this project and represents the interests of the public. The objective of the Hanford Environmental Dose Reconstruction Project is to estimate the radiation doses that populations could have received from nuclear operations at Hanford since 1944. The project is divided into the following technical tasks. These tasks correspond to the path radionuclides followed, from release to impact on humans (dose estimates): source terms, environmental transport, environmental monitoring data, demographics, agriculture, food habits, environmental pathways and dose estimates. 3 figs.

  8. An algorithm to estimate the object support in truncated images

    SciTech Connect

    Hsieh, Scott S.; Nett, Brian E.; Cao, Guangzhi; Pelc, Norbert J.

    2014-07-15

    Purpose: Truncation artifacts in CT occur if the object to be imaged extends past the scanner field of view (SFOV). These artifacts impede diagnosis and could possibly introduce errors in dose plans for radiation therapy. Several approaches exist for correcting truncation artifacts, but existing correction algorithms do not accurately recover the skin line (or support) of the patient, which is important in some dose planning methods. The purpose of this paper was to develop an iterative algorithm that recovers the support of the object. Methods: The authors assume that the truncated portion of the image is made up of soft tissue of uniform CT number and attempt to find a shape consistent with the measured data. Each known measurement in the sinogram is interpreted as an estimate of missing mass along a line. An initial estimate of the object support is generated by thresholding a reconstruction made using a previous truncation artifact correction algorithm (e.g., water cylinder extrapolation). This object support is iteratively deformed to reduce the inconsistency with the measured data. The missing data are estimated using this object support to complete the dataset. The method was tested on simulated and experimentally truncated CT data. Results: The proposed algorithm produces a better defined skin line than water cylinder extrapolation. On the experimental data, the RMS error of the skin line is reduced by about 60%. For moderately truncated images, some soft tissue contrast is retained near the SFOV. As the extent of truncation increases, the soft tissue contrast outside the SFOV becomes unusable although the skin line remains clearly defined, and in reformatted images it varies smoothly from slice to slice as expected. Conclusions: The support recovery algorithm provides a more accurate estimate of the patient outline than thresholded, basic water cylinder extrapolation, and may be preferred in some radiation therapy applications.

  9. Fibonacci Numbers and Computer Algorithms.

    ERIC Educational Resources Information Center

    Atkins, John; Geist, Robert

    1987-01-01

    The Fibonacci Sequence describes a vast array of phenomena from nature. Computer scientists have discovered and used many algorithms which can be classified as applications of Fibonacci's sequence. In this article, several of these applications are considered. (PK)
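
    One standard algorithmic application is computing Fibonacci numbers efficiently. A fast-doubling routine (not taken from the article) needs only O(log n) arithmetic steps:

        def fib_fast_doubling(n):
            """Return (F(n), F(n+1)) using the fast-doubling identities."""
            if n == 0:
                return 0, 1
            a, b = fib_fast_doubling(n // 2)      # a = F(k), b = F(k+1)
            c = a * (2 * b - a)                   # F(2k)   = F(k) * (2*F(k+1) - F(k))
            d = a * a + b * b                     # F(2k+1) = F(k)^2 + F(k+1)^2
            return (d, c + d) if n % 2 else (c, d)

    For example, fib_fast_doubling(10)[0] returns 55.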

  10. An onboard star identification algorithm

    NASA Astrophysics Data System (ADS)

    Ha, Kong; Femiano, Michael

    The paper presents the autonomous Initial Stellar Acquisition (ISA) algorithm developed for the X-Ray Timing Explorer for providing the attitude quaternion within the desired accuracy, based on the one-axis attitude knowledge (through the use of the Digital Sun Sensor, CCD Star Trackers, and the onboard star catalog, OSC). Mathematical analysis leads to an accurate measure of the performance of the algorithm as a function of various parameters, such as the probability of a tracked star being in the OSC, the sensor noise level, and the number of stars matched. It is shown that the simplicity, tractability, and robustness of the ISA algorithm, compared to a general three-axis attitude determination algorithm, make it a viable on-board solution.

  11. Scheduling Jobs with Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Ferrolho, António; Crisóstomo, Manuel

    Most scheduling problems are NP-hard: the time required to solve the problem optimally increases exponentially with the size of the problem. Scheduling problems have important applications, and a number of heuristic algorithms have been proposed to determine relatively good solutions in polynomial time. Recently, genetic algorithms (GA) have been successfully used to solve scheduling problems, as shown by the growing number of papers. GA are known as one of the most efficient algorithms for solving scheduling problems. However, when a GA is applied to scheduling problems, various crossover and mutation operators can be applied. This paper presents and examines a new concept of genetic operators for scheduling problems. A software tool called hybrid and flexible genetic algorithm (HybFlexGA) was developed to examine the performance of various crossover and mutation operators by computing simulations of job scheduling problems.
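
    To make the crossover/mutation discussion concrete, here is a minimal, generic GA for job sequences with an order-based crossover and swap mutation. This is an illustrative sketch, not the HybFlexGA tool; the makespan function is assumed to be supplied by the user.

        import random

        random.seed(0)

        def order_crossover(p1, p2):
            """Order-based crossover for permutation chromosomes (job sequences)."""
            n = len(p1)
            i, j = sorted(random.sample(range(n), 2))
            child = [None] * n
            child[i:j] = p1[i:j]                               # copy a segment from parent 1
            fill = [g for g in p2 if g not in child]           # remaining genes in parent-2 order
            for k in range(n):
                if child[k] is None:
                    child[k] = fill.pop(0)
            return child

        def swap_mutation(perm, rate=0.1):
            """Swap two random positions with the given probability."""
            perm = perm[:]
            if random.random() < rate:
                a, b = random.sample(range(len(perm)), 2)
                perm[a], perm[b] = perm[b], perm[a]
            return perm

        def genetic_schedule(jobs, makespan, pop_size=50, generations=200):
            """Tiny GA: evolve job orderings to minimize a user-supplied makespan(sequence)."""
            population = [random.sample(jobs, len(jobs)) for _ in range(pop_size)]
            for _ in range(generations):
                population.sort(key=makespan)                  # elitist selection: keep the best half
                parents = population[: pop_size // 2]
                children = [swap_mutation(order_crossover(random.choice(parents), random.choice(parents)))
                            for _ in range(pop_size - len(parents))]
                population = parents + children
            return min(population, key=makespan)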

  12. Recursive Algorithm For Linear Regression

    NASA Technical Reports Server (NTRS)

    Varanasi, S. V.

    1988-01-01

    The order of the model is determined easily. The linear-regression algorithm includes recursive equations for the coefficients of a model of increased order. The algorithm eliminates duplicative calculations and facilitates the search for the minimum order of a linear-regression model that fits a set of data satisfactorily.
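
    The record's recursion updates the regression coefficients in place when the model order is increased. As a simple, non-recursive illustration of the same goal, finding the minimum polynomial order that fits the data satisfactorily, one can brute-force the search (the tolerance and maximum order below are arbitrary assumptions):

        import numpy as np

        def minimum_order_fit(x, y, max_order=10, tol=1e-3):
            """Smallest polynomial order whose RMS residual falls below tol (brute-force sketch).

            The record's algorithm instead updates the coefficients recursively as the
            order grows, avoiding refitting each candidate model from scratch.
            """
            for order in range(max_order + 1):
                coeffs = np.polyfit(x, y, order)
                rms = np.sqrt(np.mean((np.polyval(coeffs, x) - y) ** 2))
                if rms < tol:
                    return order, coeffs
            return max_order, coeffs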

  13. Algorithmic complexity of a protein

    NASA Astrophysics Data System (ADS)

    Dewey, T. Gregory

    1996-07-01

    The information contained in a protein's amino acid sequence dictates its three-dimensional structure. To quantitate the transfer of information that occurs in the protein folding process, the Kolmogorov information entropy or algorithmic complexity of the protein structure is investigated. The algorithmic complexity of an object provides a means of quantitating its information content. Recent results have indicated that the algorithmic complexity of microstates of certain statistical mechanical systems can be estimated from the thermodynamic entropy. In the present work, it is shown that the algorithmic complexity of a protein is given by its configurational entropy. Using this result, a quantitative estimate of the information content of a protein's structure is made and is compared to the information content of the sequence. Additionally, the mutual information between sequence and structure is determined. It is seen that virtually all the information contained in the protein structure is shared with the sequence.

  14. An onboard star identification algorithm

    NASA Technical Reports Server (NTRS)

    Ha, Kong; Femiano, Michael

    1993-01-01

    The paper presents the autonomous Initial Stellar Acquisition (ISA) algorithm developed for the X-Ray Timing Explorer for providing the attitude quaternion within the desired accuracy, based on the one-axis attitude knowledge (through the use of the Digital Sun Sensor, CCD Star Trackers, and the onboard star catalog, OSC). Mathematical analysis leads to an accurate measure of the performance of the algorithm as a function of various parameters, such as the probability of a tracked star being in the OSC, the sensor noise level, and the number of stars matched. It is shown that the simplicity, tractability, and robustness of the ISA algorithm, compared to a general three-axis attitude determination algorithm, make it a viable on-board solution.

  15. Cascade Error Projection: A New Learning Algorithm

    NASA Technical Reports Server (NTRS)

    Duong, T. A.; Stubberud, A. R.; Daud, T.; Thakoor, A. P.

    1995-01-01

    A new neural network architecture and a hardware-implementable learning algorithm are proposed. The algorithm, called cascade error projection (CEP), handles lack of precision and circuit noise better than existing algorithms.

  16. Belief network algorithms: A study of performance

    SciTech Connect

    Jitnah, N.

    1996-12-31

    This abstract gives an overview of the work. We present a survey of Belief Network algorithms and propose a domain characterization system to be used as a basis for algorithm comparison and for predicting algorithm performance.

  17. Radioactive Dose Assessment and NRC Verification of Licensee Dose Calculation.

    1994-09-16

    Version 00 PCDOSE was developed for the NRC to perform calculations to determine radioactive dose due to the annual averaged offsite release of liquid and gaseous effluent by U.S. commercial nuclear power facilities. Using NRC approved dose assessment methodologies, it acts as an inspector's tool for verifying the compliance of the facility's dose assessment software. PCDOSE duplicates the calculations of the GASPAR II mainframe code as well as calculations using the methodologies of Reg. Guide 1.109 Rev. 1 and NUREG-0133 by optional choice.

  18. Radioactive Dose Assessment and NRC Verification of Licensee Dose Calculation.

    SciTech Connect

    BOHN, TED S.

    1994-09-16

    Version 00 PCDOSE was developed for the NRC to perform calculations to determine radioactive dose due to the annual averaged offsite release of liquid and gaseous effluent by U.S. commercial nuclear power facilities. Using NRC approved dose assessment methodologies, it acts as an inspector's tool for verifying the compliance of the facility's dose assessment software. PCDOSE duplicates the calculations of the GASPAR II mainframe code as well as calculations using the methodologies of Reg. Guide 1.109 Rev. 1 and NUREG-0133 by optional choice.

  19. Genetic algorithms as discovery programs

    SciTech Connect

    Hilliard, M.R.; Liepins, G.

    1986-01-01

    Genetic algorithms are mathematical counterparts to natural selection and gene recombination. As such, they have provided one of the few significant breakthroughs in machine learning. Used with appropriate reward functions and apportionment of credit, they have been successfully applied to gas pipeline operation, x-ray registration and mathematical optimization problems. This paper discusses the basics of genetic algorithms, describes a few successes, and reports on current progress at Oak Ridge National Laboratory in applications to set covering and simulated robots.

  20. A retrodictive stochastic simulation algorithm

    SciTech Connect

    Vaughan, T.G.; Drummond, P.D.; Drummond, A.J.

    2010-05-20

    In this paper we describe a simple method for inferring the initial states of systems evolving stochastically according to master equations, given knowledge of the final states. This is achieved through the use of a retrodictive stochastic simulation algorithm which complements the usual predictive stochastic simulation approach. We demonstrate the utility of this new algorithm by applying it to example problems, including the derivation of likely ancestral states of a gene sequence given a Markovian model of genetic mutation.
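
    The retrodictive algorithm complements the standard predictive stochastic simulation algorithm (SSA). For reference, a minimal predictive Gillespie SSA, which the retrodictive variant runs in the inferential reverse direction, might look like the sketch below; the reaction system (stoichiometry and propensity functions) is assumed to be supplied by the user.

        import numpy as np

        rng = np.random.default_rng(2)

        def gillespie_ssa(x0, stoichiometry, propensities, t_max):
            """Standard (predictive) Gillespie SSA for a master equation.

            stoichiometry : (n_reactions, n_species) state-change vectors
            propensities  : list of callables a_j(x) giving each reaction rate
            """
            t = 0.0
            x = np.array(x0, dtype=float)
            trajectory = [(0.0, x.copy())]
            while t < t_max:
                a = np.array([f(x) for f in propensities])
                a0 = a.sum()
                if a0 <= 0:                                   # no reaction can fire
                    break
                t += rng.exponential(1.0 / a0)                # time to next reaction
                reaction = rng.choice(len(a), p=a / a0)       # which reaction fires
                x = x + stoichiometry[reaction]
                trajectory.append((t, x.copy()))
            return trajectory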

  1. Fully relativistic lattice Boltzmann algorithm

    SciTech Connect

    Romatschke, P.; Mendoza, M.; Succi, S.

    2011-09-15

    Starting from the Maxwell-Juettner equilibrium distribution, we develop a relativistic lattice Boltzmann (LB) algorithm capable of handling ultrarelativistic systems with flat, but expanding, spacetimes. The algorithm is validated through simulations of a quark-gluon plasma, yielding excellent agreement with hydrodynamic simulations. The present scheme opens the possibility of transferring the recognized computational advantages of lattice kinetic theory to the context of both weakly and ultrarelativistic systems.

  2. High-speed CORDIC algorithm

    NASA Astrophysics Data System (ADS)

    El-Guibaly, Fayez; Sabaa, A.

    1996-10-01

    In this paper, we introduce modifications to the classic CORDIC algorithm to reduce the number of iterations, and hence the rounding noise. The modified algorithm needs, at most, half the number of iterations to achieve the same accuracy as the classical one. The modifications are applicable to linear, circular and hyperbolic CORDIC in both vectoring and rotation modes. Simulations illustrate the effect of the new modifications.
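
    For context, the classic circular CORDIC that the paper modifies can be written in a few lines. This floating-point sketch (rotation mode, convergent for |angle| up to roughly 1.74 rad) is illustrative rather than the authors' reduced-iteration version:

        import math

        def cordic_sin_cos(angle, iterations=32):
            """Classic circular CORDIC in rotation mode: returns (cos(angle), sin(angle))."""
            angles = [math.atan(2.0 ** -i) for i in range(iterations)]
            gain = 1.0
            for i in range(iterations):
                gain *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))   # pre-apply the CORDIC gain correction
            x, y, z = gain, 0.0, angle
            for i in range(iterations):
                d = 1.0 if z >= 0 else -1.0                      # rotate toward the residual angle
                x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
                z -= d * angles[i]
            return x, y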

  3. Localization algorithm for acoustic emission

    NASA Astrophysics Data System (ADS)

    Salinas, V.; Vargas, Y.; Ruzzante, J.; Gaete, L.

    2010-01-01

    In this paper, an iterative algorithm for localization of an acoustic emission (AE) source is presented. The main advantage of the system is that it does not depend on the researcher's skill in choosing the signal level used to trigger the measurement. The system was tested on cylindrical samples with an AE source at a known position; the precision of the source determination was about 2 mm, better than the precision obtained with classic localization algorithms (~1 cm).
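
    A generic formulation of iterative AE source localization solves for the source position and emission time from arrival times at several sensors by nonlinear least squares. The sketch below states that formulation; it is not the authors' specific scheme, and the sensor geometry, wave speed, and initial guess are assumptions.

        import numpy as np
        from scipy.optimize import least_squares

        def locate_ae_source(sensor_positions, arrival_times, wave_speed, x0):
            """Iterative AE source localization from arrival times at several sensors.

            Solves t_i = t0 + |x_i - s| / c for the source position s and emission time t0.
            x0 is the initial guess: source coordinates followed by t0 (length = dim + 1).
            """
            sensor_positions = np.asarray(sensor_positions, dtype=float)
            arrival_times = np.asarray(arrival_times, dtype=float)

            def residual(params):
                source, t0 = params[:-1], params[-1]
                dist = np.linalg.norm(sensor_positions - source, axis=1)
                return t0 + dist / wave_speed - arrival_times

            return least_squares(residual, x0).x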