Sample records for dose spreading algorithm

  1. From prompt gamma distribution to dose: a novel approach combining an evolutionary algorithm and filtering based on Gaussian-powerlaw convolutions.

    PubMed

    Schumann, A; Priegnitz, M; Schoene, S; Enghardt, W; Rohling, H; Fiedler, F

    2016-10-07

    Range verification and dose monitoring in proton therapy are considered highly desirable. Different methods have been developed worldwide, such as particle therapy positron emission tomography (PT-PET) and prompt gamma imaging (PGI). In general, these methods allow for a verification of the proton range. However, quantification of the dose from these measurements remains challenging. For the first time, we present an approach for estimating the dose from prompt γ-ray emission profiles. It combines a filtering procedure based on Gaussian-powerlaw convolution with an evolutionary algorithm. By means of convolving depth dose profiles with an appropriate filter kernel, prompt γ-ray depth profiles are obtained. In order to reverse this step, the evolutionary algorithm is applied. The feasibility of this approach is demonstrated for a spread-out Bragg peak in a water target.
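
    The inversion described above lends itself to a compact numerical sketch. The toy example below assumes a discretized 1D depth axis and uses a plain Gaussian kernel as a stand-in for the paper's Gaussian-powerlaw filter; the mutate-and-select loop is illustrative and not the authors' actual evolutionary operators:

      import numpy as np

      rng = np.random.default_rng(0)

      # Stand-in smoothing kernel (the paper uses Gaussian-powerlaw convolutions).
      kernel = np.exp(-0.5 * np.linspace(-3, 3, 31) ** 2)
      kernel /= kernel.sum()

      def forward(dose):
          """Map a depth-dose profile to a prompt-gamma depth profile."""
          return np.convolve(dose, kernel, mode="same")

      def invert(gamma_profile, n_pop=50, n_gen=500, sigma=0.02):
          """Toy evolutionary search for a dose profile whose forward
          projection matches the given prompt-gamma profile."""
          pop = rng.random((n_pop, gamma_profile.size))
          for _ in range(n_gen):
              errors = [np.mean((forward(p) - gamma_profile) ** 2) for p in pop]
              best = pop[int(np.argmin(errors))]
              # Next generation: mutated copies of the current best candidate.
              pop = best + sigma * rng.standard_normal((n_pop, best.size))
              pop = np.clip(pop, 0.0, None)  # dose is non-negative
          return best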

  2. Recommendations for dose calculations of lung cancer treatment plans treated with stereotactic ablative body radiotherapy (SABR)

    NASA Astrophysics Data System (ADS)

    Devpura, S.; Siddiqui, M. S.; Chen, D.; Liu, D.; Li, H.; Kumar, S.; Gordon, J.; Ajlouni, M.; Movsas, B.; Chetty, I. J.

    2014-03-01

    The purpose of this study was to systematically evaluate dose distributions computed with 5 different dose algorithms for patients with lung cancers treated using stereotactic ablative body radiotherapy (SABR). Treatment plans for 133 lung cancer patients, initially computed with a 1D pencil-beam (equivalent-path-length, EPL-1D) algorithm, were recalculated with 4 other algorithms commissioned for treatment planning, including 3D pencil-beam (EPL-3D), anisotropic analytical algorithm (AAA), collapsed cone convolution superposition (CCC), and Monte Carlo (MC). The plan prescription dose was 48 Gy in 4 fractions, normalized to the 95% isodose line. Tumors were classified according to location: peripheral tumors surrounded by lung (lung-island, N=39), peripheral tumors attached to the rib-cage or chest wall (lung-wall, N=44), and centrally located tumors (lung-central, N=50). Relative to the EPL-1D algorithm, PTV D95 and mean dose values computed with the other 4 algorithms were lowest for "lung-island" tumors with the smallest field sizes (3-5 cm). On the other hand, the smallest differences were noted for lung-central tumors treated with the largest field widths (7-10 cm). Amongst all locations, dose distribution differences were most strongly correlated with tumor size for lung-island tumors. For most cases, convolution/superposition and MC algorithms were in good agreement. Mean lung dose (MLD) values computed with the EPL-1D algorithm were highly correlated with those of the other algorithms (correlation coefficient = 0.99). The MLD values were found to be ~10% lower for small lung-island tumors with the model-based (convolution/superposition and MC) vs. the correction-based (pencil-beam) algorithms, with the model-based algorithms predicting greater low-dose spread within the lungs. This study suggests that pencil-beam algorithms should be avoided for lung SABR planning. For the most challenging cases, small tumors surrounded entirely by lung tissue (lung-island type), a Monte Carlo-based algorithm may be warranted.

  3. Dose calculation algorithm of fast fine-heterogeneity correction for heavy charged particle radiotherapy.

    PubMed

    Kanematsu, Nobuyuki

    2011-04-01

    This work addresses computing techniques for dose calculations in treatment planning with proton and ion beams, based on an efficient kernel-convolution method referred to as grid-dose spreading (GDS) and an accurate heterogeneity-correction method referred to as Gaussian beam splitting. The original GDS algorithm suffered from distortion of the dose distribution for beams tilted with respect to the dose-grid axes. Use of intermediate grids normal to the beam field has solved the beam-tilting distortion. The interplay of the arrangement between beams and grids was found to be another intrinsic source of artifact. Inclusion of rectangular-kernel convolution in beam transport, to share the beam contribution among the nearest grids in a regulated manner, has solved the interplay problem. This algorithmic framework was applied to a tilted proton pencil beam and a broad carbon-ion beam. In these cases, while the elementary pencil beams individually split into several tens, the calculation time increased only by several times with the GDS algorithm. The GDS and beam-splitting methods will complementarily enable accurate and efficient dose calculations for radiotherapy with protons and ions. Copyright © 2010 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
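
    The "dose spreading" at the heart of GDS can be illustrated in 1D: each pencil-beam contribution is shared between the nearest grid nodes with weights given by the overlap of a rectangular kernel one cell wide, which is what suppresses the beam-grid interplay artifact. A minimal sketch, assuming a uniform grid (the published method operates on 3D grids, with intermediate grids normal to the beam):

      import numpy as np

      def spread_to_grid(positions, doses, grid_nodes):
          """Deposit pencil-beam doses onto a uniform 1D grid, sharing each
          contribution between the two nearest nodes (rectangular kernel)."""
          dx = grid_nodes[1] - grid_nodes[0]
          grid = np.zeros(grid_nodes.size)
          for x, d in zip(positions, doses):
              i = int(np.floor((x - grid_nodes[0]) / dx))
              if i < 0 or i >= grid_nodes.size - 1:
                  continue  # outside the grid
              f = (x - grid_nodes[i]) / dx   # fractional distance to node i
              grid[i] += d * (1.0 - f)       # overlap with cell around node i
              grid[i + 1] += d * f           # remainder goes to node i+1
          return grid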

  4. SU-F-J-133: Adaptive Radiation Therapy with a Four-Dimensional Dose Calculation Algorithm That Optimizes Dose Distribution Considering Breathing Motion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ali, I; Algan, O; Ahmad, S

    Purpose: To model patient motion and produce four-dimensional (4D) optimized dose distributions that consider motion-artifacts in the dose calculation during the treatment planning process. Methods: An algorithm for dose calculation is developed where patient motion is considered in dose calculation at the stage of the treatment planning. First, optimal dose distributions are calculated for the stationary target volume where the dose distributions are optimized considering intensity-modulated radiation therapy (IMRT). Second, a convolution-kernel is produced from the best-fitting curve which matches the motion trajectory of the patient. Third, the motion kernel is deconvolved with the initial dose distribution optimized for the stationary target to produce a dose distribution that is optimized in four dimensions. This algorithm is tested with measured doses using a mobile phantom that moves with controlled motion patterns. Results: A motion-optimized dose distribution is obtained from the initial dose distribution of the stationary target by deconvolution with the motion-kernel of the mobile target. This motion-optimized dose distribution is equivalent to that optimized for the stationary target using IMRT. The motion-optimized and measured dose distributions are tested with the gamma index with a passing rate of >95% considering 3% dose-difference and 3 mm distance-to-agreement. If the dose delivery per beam takes place over several respiratory cycles, then the spread-out of the dose distributions is only dependent on the motion amplitude and not affected by motion frequency and phase. This algorithm is limited to motion amplitudes that are smaller than the length of the target along the direction of motion. Conclusion: An algorithm is developed to optimize dose in 4D. Besides IMRT, which provides optimal dose coverage for a stationary target, it extends dose optimization to 4D by considering target motion. This algorithm provides an alternative to motion management techniques such as beam-gating or breath-holding and has potential applications in adaptive radiation therapy.
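
    The deconvolution step of this 4D approach can be sketched as a regularized FFT division, assuming a 1D dose profile, a normalized motion kernel, and periodic boundaries; the regularization constant eps is an added assumption that stabilizes frequencies where the kernel response is near zero:

      import numpy as np

      def deconvolve_motion(static_dose, motion_kernel, eps=1e-3):
          """Remove a motion blur kernel from a dose profile optimized for a
          stationary target (Wiener-style regularized deconvolution)."""
          n = static_dose.size
          K = np.fft.rfft(motion_kernel, n)
          D = np.fft.rfft(static_dose, n)
          X = D * np.conj(K) / (np.abs(K) ** 2 + eps)
          return np.fft.irfft(X, n)

    Re-convolving the result with the motion kernel, i.e. delivering it to the moving target, approximately recovers the distribution optimized for the stationary case.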

  5. Quantifying the effect of air gap, depth, and range shifter thickness on TPS dosimetric accuracy in superficial PBS proton therapy.

    PubMed

    Shirey, Robert J; Wu, Hsinshun Terry

    2018-01-01

    This study quantifies the dosimetric accuracy of a commercial treatment planning system as functions of treatment depth, air gap, and range shifter thickness for superficial pencil beam scanning proton therapy treatments. The RayStation 6 pencil beam and Monte Carlo dose engines were each used to calculate the dose distributions for a single treatment plan with varying range shifter air gaps. Central axis dose values extracted from each of the calculated plans were compared to dose values measured with a calibrated PTW Markus chamber at various depths in RW3 solid water. Dose was measured at 12 depths, ranging from the surface to 5 cm, for each of the 18 different air gaps, which ranged from 0.5 to 28 cm. TPS dosimetric accuracy, defined as the ratio of calculated dose relative to the measured dose, was plotted as functions of depth and air gap for the pencil beam and Monte Carlo dose algorithms. The accuracy of the TPS pencil beam dose algorithm was found to be clinically unacceptable at depths shallower than 3 cm with air gaps wider than 10 cm, and increased range shifter thickness only added to the dosimetric inaccuracy of the pencil beam algorithm. Each configuration calculated with Monte Carlo was determined to be clinically acceptable. Further comparisons of the Monte Carlo dose algorithm to the measured spread-out Bragg Peaks of multiple fields used during machine commissioning verified the dosimetric accuracy of Monte Carlo in a variety of beam energies and field sizes. Discrepancies between measured and TPS calculated dose values can mainly be attributed to the ability (or lack thereof) of the TPS pencil beam dose algorithm to properly model secondary proton scatter generated in the range shifter. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

  6. Development of a golden beam data set for the commissioning of a proton double-scattering system in a pencil-beam dose calculation algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slopsema, R. L., E-mail: rslopsema@floridaproton.org; Flampouri, S.; Yeung, D.

    2014-09-15

    Purpose: The purpose of this investigation is to determine if a single set of beam data, described by a minimal set of equations and fitting variables, can be used to commission different installations of a proton double-scattering system in a commercial pencil-beam dose calculation algorithm. Methods: The beam model parameters required to commission the pencil-beam dose calculation algorithm (virtual and effective SAD, effective source size, and pristine-peak energy spread) are determined for a commercial double-scattering system. These parameters are measured in a first room and parameterized as a function of proton energy and nozzle settings by fitting four analytical equations to the measured data. The combination of these equations and fitting values constitutes the golden beam data (GBD). To determine the variation in dose delivery between installations, the same dosimetric properties are measured in two additional rooms at the same facility, as well as in a single room at another facility. The difference between the room-specific measurements and the GBD is evaluated against tolerances that guarantee the 3D dose distribution in each of the rooms matches the GBD-based dose distribution within clinically reasonable limits. The pencil-beam treatment-planning algorithm is commissioned with the GBD. The three-dimensional dose distribution in water is evaluated in the four treatment rooms and compared to the treatment-planning calculated dose distribution. Results: The virtual and effective SAD measurements fall between 226 and 257 cm. The effective source size varies between 2.4 and 6.2 cm for the large-field options, and 1.0 and 2.0 cm for the small-field options. The pristine-peak energy spread decreases from 1.05% at the lowest range to 0.6% at the highest. The virtual SAD as well as the effective source size can be accurately described by a linear relationship as a function of the inverse of the residual energy. An additional linear correction term as a function of RM-step thickness is required for accurate parameterization of the effective SAD. The GBD energy spread is given by a linear function of the exponential of the beam energy. Except for a few outliers, the measured parameters match the GBD within the specified tolerances in all four rooms investigated. For an SOBP field with a range of 15 g/cm² and an air gap of 25 cm, the maximum difference in the 80%-20% lateral penumbra between the GBD-commissioned treatment-planning system and measurements in any of the four rooms is 0.5 mm. Conclusions: The beam model parameters of the double-scattering system can be parameterized with a limited set of equations and parameters. This GBD closely matches the measured dosimetric properties in four different rooms.
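
    The parameterizations reported above are simple least-squares fits. As an illustration, the stated linear dependence of the virtual SAD on the inverse residual energy can be fitted with numpy; the energy and SAD values below are hypothetical placeholders, not the paper's data:

      import numpy as np

      # Hypothetical commissioning points: residual energy (MeV) vs virtual SAD (cm).
      E_res = np.array([80.0, 100.0, 130.0, 160.0, 190.0, 220.0])
      vsad = np.array([256.0, 251.0, 245.0, 240.0, 236.0, 233.0])

      # GBD form: vsad(E) = a / E + b, i.e. linear in 1/E.
      a, b = np.polyfit(1.0 / E_res, vsad, 1)
      print(f"vsad(E) = {a:.1f}/E + {b:.1f} cm")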

  7. Incorporating partial shining effects in proton pencil-beam dose calculation

    NASA Astrophysics Data System (ADS)

    Li, Yupeng; Zhang, Xiaodong; Fwu Lii, Ming; Sahoo, Narayan; Zhu, Ron X.; Gillin, Michael; Mohan, Radhe

    2008-02-01

    A range modulator wheel (RMW) is an essential component in passively scattered proton therapy. We have observed that a proton beam spot may shine on multiple steps of the RMW. Proton dose calculation algorithms normally do not consider this partial shining effect, and thus overestimate the dose at the proximal shoulder of the spread-out Bragg peak (SOBP) compared with the measurement. If the SOBP is adjusted to better fit the plateau region, the entrance dose is likely to be underestimated. In this work, we developed an algorithm that models this effect and allows for dose calculations that better fit the measured SOBP. First, a set of apparent modulator weights was calculated without considering partial shining. Next, protons spilled from the accelerator reaching the modulator wheel were simplified as a circular spot of uniform intensity. A weight-splitting process was then performed to generate a set of effective modulator weights with the partial shining effect incorporated. The SOBPs of eight options, which are used to label different combinations of proton-beam energy and scattering devices, were calculated with the generated effective weights. Our algorithm fitted the measured SOBP at the proximal and entrance regions much better than calculations that did not consider the partial shining effect, for all SOBPs of the eight options. In a prostate patient, we found that dose calculation without considering the partial shining effect underestimated the femoral head and skin dose.
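
    The weight-splitting idea can be shown in a simplified 1D form: a spot of finite width straddling step boundaries illuminates each wheel step in proportion to its overlap, and those fractions redistribute the apparent weights. The paper models a circular spot of uniform intensity; the 1D uniform spot below is an illustrative simplification:

      def shining_fractions(step_edges, spot_center, spot_width):
          """Fraction of a uniform 1D beam spot illuminating each RMW step.
          step_edges is a sorted list of step boundary positions."""
          lo = spot_center - spot_width / 2.0
          hi = spot_center + spot_width / 2.0
          fractions = []
          for a, b in zip(step_edges[:-1], step_edges[1:]):
              overlap = max(0.0, min(hi, b) - max(lo, a))
              fractions.append(overlap / spot_width)
          return fractions

      # A spot centered on a step boundary splits its weight evenly:
      # shining_fractions([0.0, 1.0, 2.0], spot_center=1.0, spot_width=0.5)
      # -> [0.5, 0.5]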

  8. Dosimetric evaluation of a commercial proton spot scanning Monte-Carlo dose algorithm: comparisons against measurements and simulations

    NASA Astrophysics Data System (ADS)

    Saini, Jatinder; Maes, Dominic; Egan, Alexander; Bowen, Stephen R.; St. James, Sara; Janson, Martin; Wong, Tony; Bloch, Charles

    2017-10-01

    RaySearch Americas Inc. (NY) has introduced a commercial Monte Carlo dose algorithm (RS-MC) for routine clinical use in proton spot scanning. In this report, we provide a validation of this algorithm against phantom measurements and simulations in the GATE software package. We also compared the performance of the RayStation analytical algorithm (RS-PBA) against the RS-MC algorithm. A beam model (G-MC) for a spot scanning gantry at our proton center was implemented in the GATE software package. The model was validated against measurements in a water phantom and was used for benchmarking the RS-MC. Validation of the RS-MC was performed in a water phantom by measuring depth doses and profiles for three spread-out Bragg peak (SOBP) beams with normal incidence, an SOBP with oblique incidence, and an SOBP with a range shifter and large air gap. The RS-MC was also validated against measurements and simulations in heterogeneous phantoms created by placing lung or bone slabs in a water phantom. Lateral dose profiles near the distal end of the beam were measured with a microDiamond detector and compared to the G-MC simulations, RS-MC and RS-PBA. Finally, the RS-MC and RS-PBA were validated against measured dose distributions in an Alderson-Rando (AR) phantom. Measurements were made using Gafchromic film in the AR phantom and compared to doses using the RS-PBA and RS-MC algorithms. For SOBP depth doses in a water phantom, all three algorithms matched the measurements to within ±3% at all points and a range within 1 mm. The RS-PBA algorithm showed up to a 10% difference in dose at the entrance for the beam with a range shifter and >30 cm air gap, while the RS-MC and G-MC were always within 3% of the measurement. For an oblique beam incident at 45°, the RS-PBA algorithm showed up to 6% local dose differences and broadening of the distal fall-off by 5 mm. Both the RS-MC and G-MC accurately predicted the depth dose to within ±3% and the distal fall-off to within 2 mm. In an anthropomorphic phantom, the gamma index (dose tolerance = 3%, distance-to-agreement = 3 mm) was greater than 90% for six out of seven planes using the RS-MC, and three out of seven for the RS-PBA. The RS-MC algorithm demonstrated improved dosimetric accuracy over the RS-PBA in homogeneous, heterogeneous and anthropomorphic phantoms. The computation performance of the RS-MC was similar to the RS-PBA algorithm. For complex disease sites like breast, head and neck, and lung cancer, the RS-MC algorithm will provide significantly more accurate treatment planning.
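
    The gamma evaluation quoted above is a standard metric and easy to reproduce. A minimal 1D global gamma implementation (3%/3 mm by default, dose difference taken relative to the reference maximum; brute-force search, adequate for measured profiles):

      import numpy as np

      def gamma_index(x, ref, evl, dta=3.0, dd=0.03):
          """1D global gamma: for each reference point, minimize the combined
          dose-difference / distance-to-agreement metric over the evaluated
          distribution. x in mm; ref and evl sampled on the same grid."""
          dmax = ref.max()
          gammas = np.empty(ref.size)
          for i in range(ref.size):
              dist2 = ((x - x[i]) / dta) ** 2
              dose2 = ((evl - ref[i]) / (dd * dmax)) ** 2
              gammas[i] = np.sqrt(np.min(dist2 + dose2))
          return gammas

      # Pass rate, as reported in the abstract:
      # pass_rate = np.mean(gamma_index(x, measured, calculated) <= 1.0)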

  9. Dosimetric evaluation of a commercial proton spot scanning Monte-Carlo dose algorithm: comparisons against measurements and simulations.

    PubMed

    Saini, Jatinder; Maes, Dominic; Egan, Alexander; Bowen, Stephen R; St James, Sara; Janson, Martin; Wong, Tony; Bloch, Charles

    2017-09-12

    RaySearch Americas Inc. (NY) has introduced a commercial Monte Carlo dose algorithm (RS-MC) for routine clinical use in proton spot scanning. In this report, we provide a validation of this algorithm against phantom measurements and simulations in the GATE software package. We also compared the performance of the RayStation analytical algorithm (RS-PBA) against the RS-MC algorithm. A beam model (G-MC) for a spot scanning gantry at our proton center was implemented in the GATE software package. The model was validated against measurements in a water phantom and was used for benchmarking the RS-MC. Validation of the RS-MC was performed in a water phantom by measuring depth doses and profiles for three spread-out Bragg peak (SOBP) beams with normal incidence, an SOBP with oblique incidence, and an SOBP with a range shifter and large air gap. The RS-MC was also validated against measurements and simulations in heterogeneous phantoms created by placing lung or bone slabs in a water phantom. Lateral dose profiles near the distal end of the beam were measured with a microDiamond detector and compared to the G-MC simulations, RS-MC and RS-PBA. Finally, the RS-MC and RS-PBA were validated against measured dose distributions in an Alderson-Rando (AR) phantom. Measurements were made using Gafchromic film in the AR phantom and compared to doses using the RS-PBA and RS-MC algorithms. For SOBP depth doses in a water phantom, all three algorithms matched the measurements to within ±3% at all points and a range within 1 mm. The RS-PBA algorithm showed up to a 10% difference in dose at the entrance for the beam with a range shifter and >30 cm air gap, while the RS-MC and G-MC were always within 3% of the measurement. For an oblique beam incident at 45°, the RS-PBA algorithm showed up to 6% local dose differences and broadening of the distal fall-off by 5 mm. Both the RS-MC and G-MC accurately predicted the depth dose to within ±3% and the distal fall-off to within 2 mm. In an anthropomorphic phantom, the gamma index (dose tolerance = 3%, distance-to-agreement = 3 mm) was greater than 90% for six out of seven planes using the RS-MC, and three out of seven for the RS-PBA. The RS-MC algorithm demonstrated improved dosimetric accuracy over the RS-PBA in homogeneous, heterogeneous and anthropomorphic phantoms. The computation performance of the RS-MC was similar to the RS-PBA algorithm. For complex disease sites like breast, head and neck, and lung cancer, the RS-MC algorithm will provide significantly more accurate treatment planning.

  10. Safety reports on the off-label use of baclofen for alcohol-dependence: recommendations to improve causality assessment.

    PubMed

    Rolland, Benjamin; Auffret, Marine; Franchitto, Nicolas

    2016-06-01

    The off-label use of high-dose baclofen (HDB) for alcohol-dependence has recently spread. However, HDB has been associated with numerous reports of adverse events (AEs). Pharmacovigilance reporting is supposed to differentiate AEs from adverse drug reactions (ADRs), for which the causality of the drug is determined using validated methods. Since 2010, we found 20 publications on baclofen-related AEs in alcohol dependence, in Medline-referenced journals or national pharmacovigilance reports. We focused on whether these reports used causality algorithms, and provided essential elements for determining baclofen causality and excluding the involvement of alcohol and other psychoactive substances or psychotropic drugs. In half of the cases, no causality algorithm was used. Detailed information on baclofen dosing was found in 17 of 20 (85%) articles, whereas alcohol doses were given in only 10 (50%) publications. Other psychoactive substances and psychotropic drugs were broached in 14 (70%) publications. Future publications reporting suspected HDB-induced ADRs should use validated causality algorithms and provide a sufficient amount of contextual information for excluding other potential causes. For HDB, the psychiatric history and the longitudinal description of alcohol consumption and associated doses of psychoactive substances or psychotropic medications should be detailed for every reported case.

  11. Computational method for the correction of proximity effect in electron-beam lithography (Poster Paper)

    NASA Astrophysics Data System (ADS)

    Chang, Chih-Yuan; Owen, Gerry; Pease, Roger Fabian W.; Kailath, Thomas

    1992-07-01

    Dose correction is commonly used to compensate for the proximity effect in electron lithography. The computation of the required dose modulation is usually carried out using 'self-consistent' algorithms that work by solving a large number of simultaneous linear equations. However, there are two major drawbacks: the resulting correction is not exact, and the computation time is excessively long. A computational scheme, as shown in Figure 1, has been devised to eliminate this problem by the deconvolution of the point spread function in the pattern domain. The method is iterative, based on a steepest descent algorithm. The scheme has been successfully tested on a simple pattern with a minimum feature size of 0.5 µm, exposed on a MEBES tool at 10 keV in 0.2 µm of PMMA resist on a silicon substrate.
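
    The iterative scheme can be sketched as steepest descent on the least-squares residual between the target pattern and the point-spread-function-blurred dose, here in a 1D pattern domain; the step size and iteration count are illustrative assumptions:

      import numpy as np

      def correct_dose(target, psf, alpha=0.5, n_iter=200):
          """Iteratively solve (psf * dose) = target for the dose modulation
          by steepest descent; the gradient of the squared residual is the
          residual correlated with the psf."""
          psf = psf / psf.sum()
          dose = target.astype(float).copy()
          for _ in range(n_iter):
              residual = target - np.convolve(dose, psf, mode="same")
              dose += alpha * np.convolve(residual, psf[::-1], mode="same")
              dose = np.clip(dose, 0.0, None)  # exposure dose cannot be negative
          return dose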

  12. SU-D-BRC-01: An Automatic Beam Model Commissioning Method for Monte Carlo Simulations in Pencil-Beam Scanning Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qin, N; Shen, C; Tian, Z

    Purpose: Monte Carlo (MC) simulation is typically regarded as the most accurate dose calculation method for proton therapy. Yet for real clinical cases, the overall accuracy also depends on that of the MC beam model. Commissioning a beam model to faithfully represent a real beam requires finely tuning a set of model parameters, which could be tedious given the large number of pencil beams to commission. This abstract reports an automatic beam-model commissioning method for pencil-beam scanning proton therapy via an optimization approach. Methods: We modeled a real pencil beam with energy and spatial spread following Gaussian distributions. Mean energy, energy spread, and spatial spread are the model parameters. To commission against a real beam, we first performed MC simulations to calculate dose distributions of a set of ideal (monoenergetic, zero-size) pencil beams. The dose distribution for a real pencil beam is hence a linear superposition of doses for those ideal pencil beams with weights in the Gaussian form. We formulated the commissioning task as an optimization problem, such that the calculated central-axis depth dose and lateral profiles at several depths match the corresponding measurements. An iterative algorithm combining a conjugate gradient method and parameter fitting was employed to solve the optimization problem. We validated our method in simulation studies. Results: We calculated dose distributions for three real pencil beams with nominal energies 83, 147 and 199 MeV using realistic beam parameters. These data were regarded as measurements and used for commissioning. After commissioning, the average differences in energy and beam spread between determined values and ground truth were 4.6% and 0.2%. With the commissioned model, we recomputed doses; mean dose differences from measurements were 0.64%, 0.20% and 0.25%. Conclusion: The developed automatic MC beam-model commissioning method for pencil-beam scanning proton therapy can determine beam model parameters with satisfactory accuracy.
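
    A compact sketch of the commissioning optimization: given precomputed ideal-pencil-beam depth doses (one row per grid energy), find the Gaussian weight parameters whose superposition matches a measured depth dose. The array shapes and starting values are assumptions, and the paper additionally fits the spatial spread and uses a conjugate-gradient/parameter-fitting hybrid rather than a generic solver:

      import numpy as np
      from scipy.optimize import least_squares

      def superposed_dose(params, E_grid, D_ideal):
          """Depth dose of a real beam as a Gaussian-weighted superposition of
          ideal monoenergetic pencil-beam doses (rows of D_ideal)."""
          mu, sigma = params
          w = np.exp(-0.5 * ((E_grid - mu) / sigma) ** 2)
          w /= w.sum()
          return D_ideal.T @ w  # shape: (n_depths,)

      def commission(measured, E_grid, D_ideal, p0=(150.0, 1.0)):
          """Fit mean energy and energy spread against a measured depth dose."""
          res = least_squares(
              lambda p: superposed_dose(p, E_grid, D_ideal) - measured, p0)
          return res.x  # (mean energy, energy spread)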

  13. Selective epidemic vaccination under the performant routing algorithms

    NASA Astrophysics Data System (ADS)

    Bamaarouf, O.; Alweimine, A. Ould Baba; Rachadi, A.; EZ-Zahraouy, H.

    2018-04-01

    Despite the extensive research on traffic dynamics and epidemic spreading, the effect of routing algorithm strategies on traffic-driven epidemic spreading has not received adequate attention. It is well known that more performant routing algorithm strategies are used to overcome the congestion problem. However, our main result shows, unexpectedly, that these algorithms favor virus spreading more than the case where the shortest-path-based algorithm is used. In this work, we studied virus spreading in a complex network using the efficient-path and the global dynamic routing algorithms as compared to the shortest-path strategy. Some previous studies have tried to modify the routing rules to limit virus spreading, but at the expense of reducing traffic transport efficiency. This work proposes a solution to overcome this drawback by using a selective vaccination procedure instead of the random vaccination often used in the literature. We found that selective vaccination succeeded in eradicating the virus better than a pure random intervention for the performant routing algorithm strategies.
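
    The contrast between the two interventions reduces to how the immunized set is chosen. A sketch, with node degree used as a stand-in for the "selective" criterion (the study ties selection to the routing strategy, so the exact ranking there may differ):

      import random
      import networkx as nx

      def choose_vaccinated(G, fraction, selective=True):
          """Return the set of immunized nodes: highest-degree nodes for the
          selective procedure, or a uniform random sample otherwise."""
          k = int(fraction * G.number_of_nodes())
          if selective:
              ranked = sorted(G.degree, key=lambda nd: nd[1], reverse=True)
              return {n for n, _ in ranked[:k]}
          return set(random.sample(list(G.nodes), k))

      # Example: G = nx.barabasi_albert_graph(1000, 3)
      # vaccinated = choose_vaccinated(G, fraction=0.05)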

  14. SU-E-I-82: Improving CT Image Quality for Radiation Therapy Using Iterative Reconstruction Algorithms and Slightly Increasing Imaging Doses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Noid, G; Chen, G; Tai, A

    2014-06-01

    Purpose: Iterative reconstruction (IR) algorithms are developed to improve CT image quality (IQ) by reducing noise without diminishing spatial resolution or contrast. For CT in radiation therapy (RT), slightly increasing imaging dose to improve IQ may be justified if it can substantially enhance structure delineation. The purpose of this study is to investigate and to quantify the IQ enhancement as a result of increasing imaging doses and using IR algorithms. Methods: CT images were acquired for phantoms, built to evaluate IQ metrics including spatial resolution, contrast and noise, with a variety of imaging protocols using a CT scanner (Definition AS Open, Siemens) installed inside a Linac room. Representative patients were scanned once the protocols were optimized. Both phantom and patient scans were reconstructed using the Sinogram Affirmed Iterative Reconstruction (SAFIRE) and the Filtered Back Projection (FBP) methods. IQ metrics of the obtained CTs were compared. Results: IR techniques are demonstrated to preserve spatial resolution as measured by the point spread function and to reduce noise in comparison to traditional FBP. Driven by the reduction in noise, the contrast-to-noise ratio is doubled by adopting the highest SAFIRE strength. As expected, increasing imaging dose reduces noise for both SAFIRE and FBP reconstructions. The contrast-to-noise ratio increases from 3 to 5 by increasing the dose by a factor of 4. Similar IQ improvement was observed on the CTs for selected patients with pancreas and prostate cancers. Conclusion: The IR techniques produce a measurable enhancement to CT IQ by reducing the noise. Increasing imaging dose further reduces noise independent of the IR techniques. The improved CT enables more accurate delineation of tumors and/or organs at risk during RT planning and delivery guidance.
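
    The contrast-to-noise figures quoted above follow from a simple region-of-interest calculation. A sketch using one common convention, CNR = |difference of ROI means| / background standard deviation, with ROIs given as numpy slice pairs:

      import numpy as np

      def cnr(img, roi_signal, roi_background):
          """Contrast-to-noise ratio from two ROIs of a 2D image, each given
          as a (row_slice, col_slice) tuple."""
          s = img[roi_signal]
          b = img[roi_background]
          return abs(s.mean() - b.mean()) / b.std()

      # Example: cnr(ct_slice, (slice(100, 120), slice(100, 120)),
      #                        (slice(10, 30), slice(10, 30)))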

  15. A Monte Carlo investigation of contaminant electrons due to a novel in vivo transmission detector.

    PubMed

    Asuni, G; Jensen, J M; McCurdy, B M C

    2011-02-21

    A novel transmission detector (IBA Dosimetry, Germany) developed as an IMRT quality assurance tool, intended for in vivo patient dose measurements, is studied here. The goal of this investigation is to use Monte Carlo techniques to characterize treatment beam parameters in the presence of the detector and to compare them to those of a plastic block tray (a frequently used clinical device). Particular attention is paid to the impact of the detector on the electron contamination model parameters of two commercial dose calculation algorithms. The linac head together with the COMPASS transmission detector (TRD) was modeled using the BEAMnrc code. To understand the effect of the TRD on treatment beams, the contaminant electron fluence, energy spectra, and angular distributions at different SSDs were analyzed for open and non-open (i.e. TRD and block tray) fields. Contaminant electrons in the BEAMnrc simulations were separated according to where they were created. Calculation of surface dose and the evaluation of contributions from contaminant electrons were performed using the DOSXYZnrc user code. The effect of the TRD on the contaminant electron model parameters in the Eclipse AAA and Pinnacle³ dose calculation algorithms was investigated. Comparisons of the fluence of contaminant electrons produced in the non-open fields versus the open field show that electrons created in the non-open fields increase at shorter SSD, but most of the electrons at shorter SSD are of low energy with large angular spread. These electrons are out-scattered or absorbed in air and contribute less to surface dose at larger SSD. Calculated surface doses with the block tray are higher than those with the TRD. The contribution of contaminant electrons to dose in the buildup region increases with increasing field size. The additional contribution of electrons to surface dose increases with field size for the TRD and block tray. The introduction of the TRD results in a 12% and 15% increase in the Gaussian widths used in the contaminant electron source model of the Eclipse AAA dose algorithm. The off-axis coefficient in the Pinnacle³ dose calculation algorithm decreases in the presence of the TRD compared to without the device. The electron model parameters were modified to reflect the increase in electron contamination with the TRD, a necessary step for accurate beam modeling when using the device.

  16. Experimental designs for detecting synergy and antagonism between two drugs in a pre-clinical study.

    PubMed

    Sperrin, Matthew; Thygesen, Helene; Su, Ting-Li; Harbron, Chris; Whitehead, Anne

    2015-01-01

    The identification of synergistic interactions between combinations of drugs is an important area within drug discovery and development. Pre-clinically, large numbers of screening studies to identify synergistic pairs of compounds can often be run, necessitating efficient and robust experimental designs. We consider experimental designs for detecting interaction between two drugs in a pre-clinical in vitro assay in the presence of uncertainty about the monotherapy response. The monotherapies are assumed to follow the Hill equation with common lower and upper asymptotes, and a common variance. The optimality criterion used is the variance of the interaction parameter. We focus on ray designs and investigate two algorithms for selecting the optimum set of dose combinations. The first is a forward algorithm in which design points are added sequentially. This is found to give useful solutions in simple cases but can lack robustness when knowledge about the monotherapy parameters is insufficient. The second algorithm is a more pragmatic approach where the design points are constrained to be distributed log-normally along the rays and monotherapy doses. We find that the pragmatic algorithm is more stable than the forward algorithm, and even when the forward algorithm has converged, the pragmatic algorithm can still outperform it. Practically, we find that good designs for detecting an interaction have equal numbers of points on monotherapies and combination therapies, with those points typically placed in positions where a 50% response is expected. More uncertainty in the monotherapy parameters leads to an optimal design with design points that are more spread out. Copyright © 2015 John Wiley & Sons, Ltd.
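
    The monotherapy model here is the four-parameter Hill equation, and the practical finding above amounts to placing design points near the dose giving a 50% response, i.e. the ED50. A sketch of the model (parameter values illustrative):

      def hill(dose, e0, emax, ed50, h):
          """Four-parameter Hill (sigmoid Emax) monotherapy response with
          lower asymptote e0, upper asymptote emax, midpoint ed50, slope h."""
          return e0 + (emax - e0) * dose ** h / (ed50 ** h + dose ** h)

      # At dose = ED50 the response is halfway between the asymptotes:
      # hill(1.0, e0=0.0, emax=1.0, ed50=1.0, h=2.0) -> 0.5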

  17. Assessment of monitor unit limiting strategy using volumetric modulated arc therapy for cancer of hypopharynx.

    PubMed

    Ahamed, Shabbir; Singh, Navin; Gudipudi, Deleep; Mulinti, Suneetha; Talluri, Anil; Soubhagya, Bhudevi; Sresty, Madhusudhana

    2017-03-01

    To quantify the relative merit of MU-deprived plans against freely optimized plans in terms of plan quality, and to report the changes induced by the progressive resolution optimizer algorithm (PRO3) to the dynamic parameters of RapidArc. Ten cases of carcinoma of the hypopharynx were retrospectively planned in three phases without using the MU tool. Replicas of these baseline plans were reoptimized using the "Intermediate dose" feature and the MU tool to reduce MUs by 20%, 35%, and 50%. Overall quality indices for target and OAR, integral dose, and dose-volume spread were assessed. All plans were appraised for changes induced in RapidArc dynamic parameters and pre-treatment quality assurance (QA). With increasing MU reduction strength (MURS), MU/Gy values reduced for all phases with an overall range of 8.6-34.7%; the mean dose rate decreased among plans of each phase, with phase-3 plans recording greater reductions. MURS20% showed a good trade-off between MUs and plan quality. Dose-volume spread below 5 Gy was higher for baseline plans, while lower between 20 and 35 Gy. Integral dose was lower for MURS0%, not exceeding 1.0%, compared against restrained plans. Mean leaf aperture and control point areas increased systematically, and correlated negatively with increasing MURS. Absolute delta dose rate variations were least for MURS0%. MU-deprived plans exhibited GAI (>93%), better than MURS0% plans. Baseline plans are superior to MU-restrained plans. However, MURS20% offers equivalent and acceptable plan quality with a saving in MUs and improved GAI for complex cases. The MU tool may be adopted to tailor treatment plans using PRO3. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  18. Dosimetric properties of a proton beamline dedicated to the treatment of ocular disease

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slopsema, R. L., E-mail: rslopsema@floridaproton.org; Mamalui, M.; Yeung, D.

    2014-01-15

    Purpose: A commercial proton eyeline has been developed to treat ocular disease. Radiotherapy of intraocular lesions (e.g., uveal melanoma, age-related macular degeneration) requires sharp dose gradients to avoid critical structures like the macula and optic disc. A high dose rate is needed to limit patient gazing times during delivery of large fractional doses. Dose delivery needs to be accurate and predictable, not least because current treatment planning algorithms have limited dose modeling capabilities. The purpose of this paper is to determine the dosimetric properties of a new proton eyeline. These properties are compared to those of existing systems and evaluated in the context of the specific clinical requirements of ocular treatments. Methods: The eyeline is part of a high-energy, cyclotron-based proton therapy system. The energy at the entrance of the eyeline is 105 MeV. A range modulator (RM) wheel generates the spread-out Bragg peak, while a variable range shifter system adjusts the range and spreads the beam laterally. The range can be adjusted from 0.5 up to 3.4 g/cm²; the modulation width can be varied in steps of 0.3 g/cm² or less. The maximum field diameter is 2.5 cm. All fields can be delivered with a dose rate of 30 Gy/min or more. The eyeline is calibrated according to the IAEA TRS-398 protocol using a cylindrical ionization chamber. Depth dose distributions and dose/MU are measured with a parallel-plate ionization chamber; lateral profiles with radiochromic film. The dose/MU is modeled as a function of range, modulation width, and instantaneous MU rate with fit parameters determined per option (RM wheel). Results: The distal fall-off of the spread-out Bragg peak is 0.3 g/cm², larger than for most existing systems. The lateral penumbra varies between 0.9 and 1.4 mm, except for fully modulated fields, which have a larger penumbra at the skin. The source-to-axis distance is found to be 169 cm. The dose/MU shows a strong dependence on range (up to 4%/mm). A linear increase in dose/MU as a function of instantaneous MU rate is observed. The dose/MU model describes the measurements with an accuracy of ±2%. Neutron dose is found to be 146 ± 102 µSv/Gy at the contralateral eye and 19 ± 13 µSv/Gy at the chest. Conclusions: Measurements show the proton eyeline meets the requirements to effectively treat ocular disease.

  19. Tetrahydrocannabinol-induced suppression of macrophage spreading and phagocytic activity in vitro.

    PubMed

    Lopez-Cepero, M; Friedman, M; Klein, T; Friedman, H

    1986-06-01

    The effects of tetrahydrocannabinol (THC) on several parameters of macrophage function in vitro were assessed. Delta-9-THC added to cultures of normal mouse peritoneal cells in vitro affected the ability of the cells to spread on glass surfaces and also had some effect on their ability to phagocytize yeast. These effects were dose-related. A concentration of 20 micrograms of THC almost completely inhibited macrophage spreading, but it also decreased viability and the total number of these cells. Doses of 10 or 5 micrograms of THC also inhibited spreading but had little effect on cell viability or number. A dose of 1.0 microgram of THC had some inhibitory effect on spreading, and the lowest dose affecting spreading appeared to be about 0.05 micrograms per culture. Higher doses of THC were necessary to inhibit phagocytosis of yeast particles as determined by direct microscopic examination or the use of radiolabeled yeast as the test particles. These results indicate that several readily measured functions of macrophages may be suppressed by THC.

  1. Using HFire for spatial modeling of fire in shrublands

    Treesearch

    Seth H. Peterson; Marco E. Morais; Jean M. Carlson; Philip E. Dennison; Dar A. Roberts; Max A. Moritz; David R. Weise

    2009-01-01

    An efficient raster fire-spread model named HFire is introduced. HFire can simulate single-fire events or long-term fire regimes, using the same fire-spread algorithm. This paper describes the HFire algorithm, benchmarks the model using a standard set of tests developed for FARSITE, and compares historical and predicted fire spread perimeters for three southern...
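
    A raster fire-spread step can be sketched as a cellular automaton on a boolean grid. The toy below uses a uniform ignition probability and periodic boundaries for brevity, whereas HFire derives spread rates from the Rothermel equations with fuel, wind, and slope inputs:

      import numpy as np

      def spread_step(burning, fuel, p_spread, rng):
          """One synchronous step: each burning cell may ignite its four
          neighbours wherever fuel remains (np.roll wraps at the edges)."""
          exposed = np.zeros_like(burning)
          for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
              exposed |= np.roll(burning, shift, axis=(0, 1))
          new_fires = exposed & fuel & (rng.random(burning.shape) < p_spread)
          return burning | new_fires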

  2. Epidemic spreading in annealed directed networks: susceptible-infected-susceptible model and contact process.

    PubMed

    Kwon, Sungchul; Kim, Yup

    2013-01-01

    We investigate epidemic spreading in annealed directed scale-free networks with the in-degree (k) distribution P_in(k) ~ k^(-γ_in) and the out-degree (ℓ) distribution P_out(ℓ) ~ ℓ^(-γ_out). The correlation of each node on the networks is controlled by the probability r (0 ≤ r ≤ 1) in two different algorithms, the so-called k and ℓ algorithms. For r = 1, the k algorithm gives ⟨kℓ⟩ = ⟨k²⟩, whereas the ℓ algorithm gives ⟨kℓ⟩ = ⟨ℓ²⟩. For r = 0, ⟨kℓ⟩ = ⟨k⟩⟨ℓ⟩ for both algorithms. As the prototype of epidemic spreading, the susceptible-infected-susceptible model and the contact process on the networks are analyzed using the heterogeneous mean-field theory and Monte Carlo simulations. The directedness of links and the correlation of the network are found to play important roles in the spreading, so that the critical behaviors of both models are distinct from those on undirected scale-free networks.
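
    The heterogeneous mean-field analysis mentioned above can be sketched for the simpler undirected SIS case; the paper's directed, correlated version couples in- and out-degree classes, which is omitted here for brevity:

      import numpy as np

      def sis_prevalence(k, Pk, lam, mu=1.0, n_steps=20000, dt=0.001):
          """Integrate the degree-based mean-field SIS equations
          d(rho_k)/dt = -mu*rho_k + lam*k*(1 - rho_k)*Theta, with
          Theta = sum_k k P(k) rho_k / <k>, and return the prevalence."""
          rho = np.full(k.size, 0.01)  # small initial infected fraction
          k_mean = np.sum(k * Pk)
          for _ in range(n_steps):
              theta = np.sum(k * Pk * rho) / k_mean
              rho += dt * (-mu * rho + lam * k * (1.0 - rho) * theta)
          return float(np.sum(Pk * rho))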

  3. Energy spectrum control for modulated proton beams.

    PubMed

    Hsi, Wen C; Moyers, Michael F; Nichiporov, Dmitri; Anferov, Vladimir; Wolanski, Mark; Allgower, Chris E; Farr, Jonathan B; Mascia, Anthony E; Schreuder, Andries N

    2009-06-01

    In proton therapy delivered with range-modulated beams, the energy spectrum of protons entering the delivery nozzle can affect the dose uniformity within the target region and the dose gradient around its periphery. For a cyclotron with a fixed extraction energy, a range shifter is used to change the energy, but this produces increasing energy spreads for decreasing energies. This study investigated the magnitude of the effects of different energy spreads on dose uniformity and distal edge dose gradient and determined the limits for controlling the incident spectrum. A multilayer Faraday cup (MLFC) was calibrated against depth dose curves measured in water for nonmodulated beams with various incident spectra. Depth dose curves were measured in a water phantom and in a multilayer ionization chamber detector for modulated beams using different incident energy spreads. Some nozzle entrance energy spectra can produce unacceptable dose nonuniformities of up to ±21% over the modulated region. For modulated beams and small beam ranges, the width of the distal penumbra can vary by a factor of 2.5. When the energy spread was controlled within the defined limits, the dose nonuniformity was less than ±3%. To facilitate understanding of the results, the data were compared to the measured and Monte Carlo calculated data from a variable extraction energy synchrotron, which has a narrow spectrum for all energies. Dose uniformity is only maintained within prescription limits when the energy spread is controlled. At low energies, a large spread can be beneficial for extending the energy range over which a single range modulator device can be used. An MLFC can be used as part of a feedback loop to provide specified energy spreads for different energies.
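
    The mechanism studied here, an energy spread translating into a range spread that softens the distal fall-off, can be sketched by smearing a pristine depth-dose curve with a Gaussian in depth; the mapping from energy spread to range spread sigma is assumed given:

      import numpy as np

      def smear_bragg_peak(depth_dose, dz, sigma_range):
          """Convolve a pristine Bragg curve (sampled every dz cm) with a
          Gaussian range spread of width sigma_range (cm); wider spreads
          broaden the distal penumbra, as measured above."""
          zz = np.arange(-4.0 * sigma_range, 4.0 * sigma_range + dz, dz)
          g = np.exp(-0.5 * (zz / sigma_range) ** 2)
          return np.convolve(depth_dose, g / g.sum(), mode="same")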

  4. Commissioning dose computation models for spot scanning proton beams in water for a commercially available treatment planning system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, X. R.; Poenisch, F.; Lii, M.

    2013-04-15

    Purpose: To present our method and experience in commissioning dose models in water for spot scanning proton therapy in a commercial treatment planning system (TPS). Methods: The input data required by the TPS included in-air transverse profiles and integral depth doses (IDDs). All input data were obtained from Monte Carlo (MC) simulations that had been validated by measurements. MC-generated IDDs were converted to units of Gy mm²/MU using the measured IDDs at a depth of 2 cm employing the largest commercially available parallel-plate ionization chamber. The sensitive area of the chamber was insufficient to fully encompass the entire lateral dose deposited at depth by a pencil beam (spot). To correct for the detector size, correction factors as a function of proton energy were defined and determined using MC. The fluence of individual spots was initially modeled as a single Gaussian (SG) function and later as a double Gaussian (DG) function. The DG fluence model was introduced to account for the spot fluence due to contributions of large angle scattering from the devices within the scanning nozzle, especially from the spot profile monitor. To validate the DG fluence model, we compared calculations and measurements, including doses at the center of spread out Bragg peaks (SOBPs) as a function of nominal field size, range, and SOBP width, lateral dose profiles, and depth doses for different widths of SOBP. Dose models were validated extensively with patient treatment field-specific measurements. Results: We demonstrated that the DG fluence model is necessary for predicting the field size dependence of dose distributions. With this model, the calculated doses at the center of SOBPs as a function of nominal field size, range, and SOBP width, lateral dose profiles and depth doses for rectangular target volumes agreed well with respective measured values. With the DG fluence model for our scanning proton beam line, we successfully treated more than 500 patients from March 2010 through June 2012 with acceptable agreement between TPS calculated and measured dose distributions. However, the current dose model still has limitations in predicting field size dependence of doses at some intermediate depths of proton beams with high energies. Conclusions: We have commissioned a DG fluence model for clinical use. It is demonstrated that the DG fluence model is significantly more accurate than the SG fluence model. However, some deficiencies in modeling the low-dose envelope in the current dose algorithm still exist. Further improvements to the current dose algorithm are needed. The method presented here should be useful for commissioning pencil beam dose algorithms in new versions of TPS in the future.
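
    The single- and double-Gaussian spot fluence models compared in this work have a simple closed form. A sketch of the radial DG profile; sigma1, sigma2 and the halo weight w are fitted per energy during commissioning and their values are not given here:

      import numpy as np

      def dg_fluence(r, sigma1, sigma2, w):
          """Double-Gaussian radial spot fluence: a narrow primary core plus
          a weight-w wide halo from large-angle in-nozzle scattering.
          Each component is normalized over the 2D plane."""
          def g(s):
              return np.exp(-0.5 * (r / s) ** 2) / (2.0 * np.pi * s ** 2)
          return (1.0 - w) * g(sigma1) + w * g(sigma2)

      # The SG model is the w = 0 special case.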

  5. Commissioning dose computation models for spot scanning proton beams in water for a commercially available treatment planning system

    PubMed Central

    Zhu, X. R.; Poenisch, F.; Lii, M.; Sawakuchi, G. O.; Titt, U.; Bues, M.; Song, X.; Zhang, X.; Li, Y.; Ciangaru, G.; Li, H.; Taylor, M. B.; Suzuki, K.; Mohan, R.; Gillin, M. T.; Sahoo, N.

    2013-01-01

    Purpose: To present our method and experience in commissioning dose models in water for spot scanning proton therapy in a commercial treatment planning system (TPS). Methods: The input data required by the TPS included in-air transverse profiles and integral depth doses (IDDs). All input data were obtained from Monte Carlo (MC) simulations that had been validated by measurements. MC-generated IDDs were converted to units of Gy mm²/MU using the measured IDDs at a depth of 2 cm employing the largest commercially available parallel-plate ionization chamber. The sensitive area of the chamber was insufficient to fully encompass the entire lateral dose deposited at depth by a pencil beam (spot). To correct for the detector size, correction factors as a function of proton energy were defined and determined using MC. The fluence of individual spots was initially modeled as a single Gaussian (SG) function and later as a double Gaussian (DG) function. The DG fluence model was introduced to account for the spot fluence due to contributions of large angle scattering from the devices within the scanning nozzle, especially from the spot profile monitor. To validate the DG fluence model, we compared calculations and measurements, including doses at the center of spread out Bragg peaks (SOBPs) as a function of nominal field size, range, and SOBP width, lateral dose profiles, and depth doses for different widths of SOBP. Dose models were validated extensively with patient treatment field-specific measurements. Results: We demonstrated that the DG fluence model is necessary for predicting the field size dependence of dose distributions. With this model, the calculated doses at the center of SOBPs as a function of nominal field size, range, and SOBP width, lateral dose profiles and depth doses for rectangular target volumes agreed well with respective measured values. With the DG fluence model for our scanning proton beam line, we successfully treated more than 500 patients from March 2010 through June 2012 with acceptable agreement between TPS calculated and measured dose distributions. However, the current dose model still has limitations in predicting field size dependence of doses at some intermediate depths of proton beams with high energies. Conclusions: We have commissioned a DG fluence model for clinical use. It is demonstrated that the DG fluence model is significantly more accurate than the SG fluence model. However, some deficiencies in modeling the low-dose envelope in the current dose algorithm still exist. Further improvements to the current dose algorithm are needed. The method presented here should be useful for commissioning pencil beam dose algorithms in new versions of TPS in the future. PMID:23556893

  6. Commissioning dose computation models for spot scanning proton beams in water for a commercially available treatment planning system.

    PubMed

    Zhu, X R; Poenisch, F; Lii, M; Sawakuchi, G O; Titt, U; Bues, M; Song, X; Zhang, X; Li, Y; Ciangaru, G; Li, H; Taylor, M B; Suzuki, K; Mohan, R; Gillin, M T; Sahoo, N

    2013-04-01

    To present our method and experience in commissioning dose models in water for spot scanning proton therapy in a commercial treatment planning system (TPS). The input data required by the TPS included in-air transverse profiles and integral depth doses (IDDs). All input data were obtained from Monte Carlo (MC) simulations that had been validated by measurements. MC-generated IDDs were converted to units of Gy mm²/MU using the measured IDDs at a depth of 2 cm employing the largest commercially available parallel-plate ionization chamber. The sensitive area of the chamber was insufficient to fully encompass the entire lateral dose deposited at depth by a pencil beam (spot). To correct for the detector size, correction factors as a function of proton energy were defined and determined using MC. The fluence of individual spots was initially modeled as a single Gaussian (SG) function and later as a double Gaussian (DG) function. The DG fluence model was introduced to account for the spot fluence due to contributions of large angle scattering from the devices within the scanning nozzle, especially from the spot profile monitor. To validate the DG fluence model, we compared calculations and measurements, including doses at the center of spread out Bragg peaks (SOBPs) as a function of nominal field size, range, and SOBP width, lateral dose profiles, and depth doses for different widths of SOBP. Dose models were validated extensively with patient treatment field-specific measurements. We demonstrated that the DG fluence model is necessary for predicting the field size dependence of dose distributions. With this model, the calculated doses at the center of SOBPs as a function of nominal field size, range, and SOBP width, lateral dose profiles and depth doses for rectangular target volumes agreed well with respective measured values. With the DG fluence model for our scanning proton beam line, we successfully treated more than 500 patients from March 2010 through June 2012 with acceptable agreement between TPS calculated and measured dose distributions. However, the current dose model still has limitations in predicting field size dependence of doses at some intermediate depths of proton beams with high energies. We have commissioned a DG fluence model for clinical use. It is demonstrated that the DG fluence model is significantly more accurate than the SG fluence model. However, some deficiencies in modeling the low-dose envelope in the current dose algorithm still exist. Further improvements to the current dose algorithm are needed. The method presented here should be useful for commissioning pencil beam dose algorithms in new versions of TPS in the future.

  7. Alanine/EPR dosimetry applied to the verification of a total body irradiation protocol and treatment planning dose calculation using a humanoid phantom

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schaeken, B.; Lelie, S.; Meijnders, P.

    2010-12-15

    Purpose: To avoid complications in total body irradiation (TBI), it is important to achieve a homogeneous dose distribution throughout the body and to deliver a correct dose to the lung, which is an organ at risk. The purpose of this work was to validate the TBI dose protocol and to check the accuracy of the 3D dose calculations of the treatment planning system. Methods: Dosimetry based on alanine/electron paramagnetic resonance (EPR) was used to measure dose at numerous locations within an anthropomorphic phantom (Alderson) that was irradiated in a clinical TBI beam setup. The alanine EPR dosimetry system was calibrated against water calorimetry in a Co-60 beam and the absorbed dose was determined by the use of "dose-normalized amplitudes" A_D. The dose rate of the TBI beam was checked against a Farmer ionization chamber. The phantom measurements were compared to 3D dose calculations from a treatment planning system (Pinnacle) modeled for standard dose calculations. Results: Alanine dosimetry allowed accurate measurements which were in accordance with ionization chamber measurements. The combined relative standard measurement uncertainty in the Alderson phantom was U_r(A_D) = 0.6%. The humanoid phantom was irradiated to a reference dose of 10 Gy, limiting the lung dose to 7.5 Gy. The ratio of the average measured dose midplane in the craniocaudal direction to the reference dose was 1.001 with a spread of ±4.7% (1 sd). Dose to the lung was measured in 26 locations and found, on average, 1.8% lower than expected. Lung dose was homogeneous in the ventral-dorsal direction, but a dose gradient of 0.10 Gy/cm was observed in the craniocaudal direction midline within the lung lobe. 3D dose calculations (Pinnacle) were found, on average, 2% lower compared to dose measurements on the body axis and 3% lower for the lungs. Conclusions: The alanine/EPR dosimetry system allowed accurate dose measurements which enabled the authors to validate their TBI dose protocol. Dose calculations based on a collapsed cone convolution dose algorithm modeled for regular treatments are accurate within 3% and can be further improved when the algorithm is modeled for TBI.

  8. GTV-based prescription in SBRT for lung lesions using advanced dose calculation algorithms.

    PubMed

    Lacornerie, Thomas; Lisbona, Albert; Mirabel, Xavier; Lartigau, Eric; Reynaert, Nick

    2014-10-16

    The aim of the current study was to investigate how dose is prescribed to lung lesions during SBRT when using advanced dose calculation algorithms that take into account electron transport (type B algorithms). As type A algorithms do not take into account secondary electron transport, they overestimate the dose to lung lesions. Type B algorithms are more accurate, but no consensus has yet been reached regarding dose prescription. The positive clinical results obtained using type A algorithms should be used as a starting point. In the current work, a dose-calculation experiment was performed comparing different prescription methods. Three cases with three different sizes of peripheral lung lesions were planned using three different treatment platforms. For each individual case, 60 Gy to the PTV was prescribed using a type A algorithm, and the dose distribution was recalculated using a type B algorithm in order to evaluate the impact of the secondary electron transport. Secondly, for each case a type B algorithm was used to prescribe 48 Gy to the PTV, and the resulting doses to the GTV were analyzed. Finally, prescriptions based on specific GTV dose volumes were evaluated. When using a type A algorithm to prescribe the same dose to the PTV, the differences in median GTV dose among platforms and cases were always less than 10% of the prescription dose. Prescribing to the PTV based on type B algorithms led to greater variability of the median GTV dose among cases and among platforms (24% and 28%, respectively). However, when 54 Gy was prescribed as the median GTV dose using a type B algorithm, the variability observed was minimal. Normalizing the prescription dose to the median GTV dose for lung lesions avoids variability among different cases and treatment platforms of SBRT when type B algorithms are used to calculate the dose. The combination of using a type A algorithm to optimize a homogeneous dose in the PTV and using a type B algorithm to prescribe the median GTV dose provides a very robust method for treating lung lesions.
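
    The normalization step the authors recommend, rescaling the plan so the median GTV dose equals the prescription, can be sketched in a few lines; the 54 Gy default and the array layout are illustrative assumptions.

```python
import numpy as np

def renormalize_to_median_gtv(dose, gtv_mask, prescription=54.0):
    """Rescale a dose grid so the median dose in the GTV equals the prescription.

    dose     : 3-D dose array from the type B calculation (Gy)
    gtv_mask : boolean array of the same shape flagging GTV voxels
    Returns the rescaled grid and the scale factor (which also rescales the MUs).
    """
    scale = prescription / np.median(dose[gtv_mask])
    return dose * scale, scale
```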

  9. Experimental Investigation of the Performance of Image Registration and De-aliasing Algorithms

    DTIC Science & Technology

    2009-09-01

    These types of algorithms, which correct for an undersampled point spread function, are sometimes included in the literature under the broad umbrella of superresolution. One of the test patterns is used to visually demonstrate successful de-aliasing. Subject terms: image de-aliasing; superresolution; microscanning.

  10. Blind deconvolution post-processing of images corrected by adaptive optics

    NASA Astrophysics Data System (ADS)

    Christou, Julian C.

    1995-08-01

    Experience with the adaptive optics system at the Starfire Optical Range has shown that the point spread function is non-uniform, varies both spatially and temporally, and is object dependent. Because of this, standard linear and non-linear deconvolution algorithms, which require a known point spread function, have difficulty deconvolving it from the data. In this paper we demonstrate the application of a blind deconvolution algorithm to adaptive optics compensated data, for which a separately measured point spread function is not needed.
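
    A compact sketch of one classical blind deconvolution scheme, alternating Richardson-Lucy updates in the spirit of the Ayers-Dainty iteration; this is a generic illustration, not the algorithm actually used on the Starfire Optical Range data.

```python
import numpy as np
from scipy.signal import fftconvolve

def rl_step(estimate, kernel, image, eps=1e-12):
    """One Richardson-Lucy multiplicative update of `estimate`, with `kernel` fixed."""
    blur = fftconvolve(estimate, kernel, mode='same')
    ratio = image / (blur + eps)
    return estimate * fftconvolve(ratio, kernel[::-1, ::-1], mode='same')

def blind_deconvolve(image, psf_size=(15, 15), n_iter=20):
    """Alternate RL updates of the object and of the PSF (both unknown).
    `image` is a nonnegative 2-D array; the PSF is kept image-sized so the
    object and PSF updates are symmetric."""
    image = image.astype(float)
    obj = np.full_like(image, image.mean())
    psf = np.zeros(image.shape)
    cy, cx = image.shape[0] // 2, image.shape[1] // 2
    hy, hx = psf_size[0] // 2, psf_size[1] // 2
    psf[cy - hy:cy + hy + 1, cx - hx:cx + hx + 1] = 1.0   # flat initial guess
    psf /= psf.sum()
    for _ in range(n_iter):
        psf = rl_step(psf, obj, image)   # object plays the kernel role
        psf /= psf.sum()                 # keep the PSF normalized
        obj = rl_step(obj, psf, image)   # PSF plays the kernel role
    return obj, psf
```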

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, C; Liu, H; Indiana University Bloomington, Bloomington, IN

    Purpose: A rapid cycling proton beam has several distinct characteristics superior to a slow extraction synchrotron: the beam energy and energy spread, beam intensity, and spot size can be varied spot by spot. The feasibility of using a spot scanning beam from a rapid-cycling medical synchrotron (RCMS) at 10 Hz repetition frequency is investigated in this study for its application in proton therapy. Methods: The versatility of the beam is illustrated by two examples in water phantoms: (1) a cylindrical PTV irradiated by a single field and (2) a spherical PTV irradiated by two parallel opposed fields. A uniform dose distribution is to be delivered to the volumes. The Geant4 Monte Carlo code is used to validate the dose distributions in each example. Results: Transverse algorithms are developed to produce uniform distributions in each transverse plane in the two examples with a cylindrical and a spherical PTV, respectively. Longitudinally, different proton energies are used in successive transverse planes to produce the SOBP required to cover the PTVs. In general, dose distribution uniformity within 3% is obtained for the cylinder and within 3.5% for the sphere. The transverse algorithms require only a few hundred beam spots for each plane. The algorithms may be applied to larger volumes by increasing the intensity spot by spot for the same delivery time of the same dose. The treatment time can be shorter than 1 minute for any field configuration and tumor shape. Conclusion: The unique characteristics of a spot scanning beam from an RCMS at 10 Hz repetition frequency are used to design transverse and longitudinal algorithms that produce uniform distributions for targets of arbitrary shape and size. The proposed spot scanning beam is more versatile than existing spot scanning beams in proton therapy, with better beam control and lower neutron dose. This work is supported in part by grants from the US Department of Energy under contract DE-FG02-12ER41800 and the National Science Foundation NSF PHY-1205431.

  12. An Improved Adaptive model for Information Recommending and Spreading

    NASA Astrophysics Data System (ADS)

    Chen, Duan-Bing; Gao, Hui

    2012-04-01

    People in the Internet era have to cope with information overload and expend great effort on finding what they need. Recent experiments indicate that recommendations based on users' past activities are usually less favored than those based on social relationships, and thus many researchers have proposed adaptive algorithms for social recommendation. However, in those methods, quite a number of users have little chance to recommend information, which might prevent valuable information from spreading. We present an improved algorithm that allows more users to gain enough followers to spread information. Experimental results demonstrate that both the recommendation precision and the spreading effectiveness of our method are improved significantly.

  13. A ripple-spreading genetic algorithm for the aircraft sequencing problem.

    PubMed

    Hu, Xiao-Bing; Di Paolo, Ezequiel A

    2011-01-01

    When genetic algorithms (GAs) are applied to combinatorial problems, permutation representations are usually adopted. As a result, such GAs are often confronted with feasibility and memory-efficiency problems. With the aircraft sequencing problem (ASP) as a study case, this paper reports on a novel binary-representation-based GA scheme for combinatorial problems. Unlike existing GAs for the ASP, which typically use permutation representations based on aircraft landing order, the new GA introduces a novel ripple-spreading model which transforms the original landing-order-based ASP solutions into value-based ones. In the new scheme, arriving aircraft are projected as points into an artificial space. A deterministic method inspired by the natural phenomenon of ripple-spreading on liquid surfaces is developed, which uses a few parameters as input to connect points on this space to form a landing sequence. A traditional GA, free of feasibility and memory-efficiency problems, can then be used to evolve the ripple-spreading related parameters in order to find an optimal sequence. Since the ripple-spreading model is the centerpiece of the new algorithm, it is called the ripple-spreading GA (RSGA). The advantages of the proposed RSGA are illustrated by extensive comparative studies for the case of the ASP.
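
    The abstract does not give the full ripple-spreading model, but its core decoding step can be sketched under a simple assumption: a ripple spreading at constant speed from an evolved epicenter reaches nearer points first, so any real-valued epicenter decodes to a feasible landing sequence without repair operators.

```python
import numpy as np

def ripple_sequence(points, epicenter):
    """Order points by ripple arrival time.  A ripple spreading at constant
    speed from `epicenter` reaches nearer points first, so the landing
    sequence is the argsort of Euclidean distance.  The GA then only has to
    evolve the epicenter (and projection parameters), not a permutation."""
    d = np.linalg.norm(points - epicenter, axis=1)
    return np.argsort(d)

# Illustrative use: 6 aircraft projected into a 2-D feature space
# (the projection, e.g. scaled ETA vs. weight class, is an assumption here).
rng = np.random.default_rng(0)
pts = rng.random((6, 2))
print(ripple_sequence(pts, epicenter=np.array([0.3, 0.7])))
```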

  14. SU-F-T-153: Experimental Validation and Calculation Benchmark for a Commercial Monte Carlo Pencil Beam Scanning Proton Therapy Treatment Planning System in Water

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, L; Huang, S; Kang, M

    Purpose: Eclipse proton Monte Carlo AcurosPT 13.7 was commissioned and experimentally validated for an IBA dedicated PBS nozzle in water. TOPAS 1.3 was used to isolate the cause of differences in output and penumbra between simulation and experiment. Methods: The spot profiles were measured in air at five locations using Lynx. A PTW-34070 Bragg peak chamber (Freiburg, Germany) was used to collect the relative integral Bragg peak for 15 proton energies from 100 MeV to 225 MeV. The phase space parameters (σx, σθ, ρxθ), the number of protons per MU, the energy spread, and the calculated mean energy provided by AcurosPT were identically implemented in TOPAS. The absolute dose, profiles, and field size factors measured using ionization chamber arrays were compared with both AcurosPT and TOPAS. Results: The beam spot size, σx, and the angular spread, σθ, in air were both energy-dependent: in particular, the spot size in air at isocentre ranged from 2.8 to 5.3 mm, and the angular spread ranged from 2.7 mrad to 6 mrad. The number of protons per MU increased from ∼9E7 at 100 MeV to ∼1.5E8 at 225 MeV. Both AcurosPT and TOPAS agreed with experiment within a 2 mm penumbra difference or 3% dose difference for scenarios including central-axis depth dose and profiles at two depths in multi-spot square fields, from 40 to 200 mm, for all the investigated single-energy and multi-energy beams, indicating a clinically acceptable source model and radiation transport algorithm in water. Conclusion: By comparing measured data and TOPAS simulation using the same source model, AcurosPT 13.7 was validated in water within a 2 mm penumbra difference or 3% dose difference. Benchmarks against an independent Monte Carlo code are recommended to study agreement in output, field size factors, and penumbra. This project is partially supported by the Varian grant under the master agreement between University of Pennsylvania and Varian.
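
    The quoted phase-space parameters (σx, σθ, ρxθ) define a correlated bivariate Gaussian in position and divergence. A sketch of sampling such a spot and drifting it to a downstream plane; all numbers are illustrative, not the commissioned IBA values.

```python
import numpy as np

def sample_spot_phase_space(n, sigma_x, sigma_theta, rho, rng=None):
    """Draw (position, divergence) pairs for one transverse plane from the
    correlated bivariate Gaussian defined by (sigma_x, sigma_theta, rho_xtheta).
    Units here are mm and mrad."""
    rng = rng or np.random.default_rng()
    cov = [[sigma_x**2, rho * sigma_x * sigma_theta],
           [rho * sigma_x * sigma_theta, sigma_theta**2]]
    return rng.multivariate_normal([0.0, 0.0], cov, size=n)

xt = sample_spot_phase_space(100_000, sigma_x=4.0, sigma_theta=3.0, rho=-0.9)
# Drift to a plane z mm downstream: x(z) = x + z * theta (theta converted to rad)
z = 500.0
x_at_z = xt[:, 0] + z * xt[:, 1] * 1e-3
print(x_at_z.std())   # the spot sigma evolves with drift distance
```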

  15. Improved Collaborative Filtering Algorithm via Information Transformation

    NASA Astrophysics Data System (ADS)

    Liu, Jian-Guo; Wang, Bing-Hong; Guo, Qiang

    In this paper, we propose a spreading activation approach for collaborative filtering (SA-CF). By using the opinion spreading process, the similarity between any users can be obtained. The algorithm has remarkably higher accuracy than the standard collaborative filtering using the Pearson correlation. Furthermore, we introduce a free parameter β to regulate the contributions of objects to user-user correlations. The numerical results indicate that decreasing the influence of popular objects can further improve the algorithmic accuracy and personality. We argue that a better algorithm should simultaneously require less computation and generate higher accuracy. Accordingly, we further propose an algorithm involving only the top-N similar neighbors for each target user, which has both less computational complexity and higher algorithmic accuracy.
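
    One plausible reading of the opinion-spreading similarity with the popularity-damping parameter β is sketched below; the exact normalization in the paper may differ.

```python
import numpy as np

def spreading_similarity(A, beta=1.0):
    """User-user similarity by opinion spreading on the user-object
    bipartite graph.  A is the (n_users x n_objects) 0/1 rating matrix.
    Each object's contribution is damped by its popularity k_o**beta, so
    increasing beta suppresses the influence of popular objects."""
    k_obj = A.sum(axis=0)                             # object degrees
    k_user = A.sum(axis=1)                            # user degrees
    W = A / np.where(k_obj > 0, k_obj, 1)**beta       # spread user -> object
    S = W @ A.T                                       # gather object -> user
    return S / np.where(k_user > 0, k_user, 1)[:, None]

A = np.array([[1, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 1, 1]], dtype=float)
print(spreading_similarity(A, beta=0.8))
```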

  16. Comparing fire spread algorithms using equivalence testing and neutral landscape models

    Treesearch

    Brian R. Miranda; Brian R. Sturtevant; Jian Yang; Eric J. Gustafson

    2009-01-01

    We demonstrate a method to evaluate the degree to which a meta-model approximates spatial disturbance processes represented by a more detailed model across a range of landscape conditions, using neutral landscapes and equivalence testing. We illustrate this approach by comparing burn patterns produced by a relatively simple fire spread algorithm with those generated by...

  17. Dose calculation accuracy of the Monte Carlo algorithm for CyberKnife compared with other commercially available dose calculation algorithms.

    PubMed

    Sharma, Subhash; Ott, Joseph; Williams, Jamone; Dickow, Danny

    2011-01-01

    Monte Carlo dose calculation algorithms have the potential for greater accuracy than traditional model-based algorithms. This enhanced accuracy is particularly evident in regions of lateral scatter disequilibrium, which can develop during treatments incorporating small field sizes and low-density tissue. A heterogeneous slab phantom was used to evaluate the accuracy of several commercially available dose calculation algorithms, including Monte Carlo dose calculation for CyberKnife, the Analytical Anisotropic Algorithm and Pencil Beam convolution for the Eclipse planning system, and convolution-superposition for the XiO planning system. The phantom accommodated slabs of varying density; comparisons between planned and measured dose distributions were accomplished with radiochromic film. The Monte Carlo algorithm provided the most accurate agreement between planned and measured dose distributions. In each phantom irradiation, the Monte Carlo predictions resulted in gamma analysis pass rates >97%, using acceptance criteria of 3% dose and 3-mm distance to agreement. In general, the gamma analysis pass rates for the other algorithms were <95%. The Monte Carlo dose calculation algorithm for CyberKnife provides more accurate dose distribution calculations in regions of lateral electron disequilibrium than commercially available model-based algorithms. This is primarily because of the ability of Monte Carlo algorithms to implicitly account for tissue heterogeneities; density scaling functions and/or effective depth correction factors are not required. Copyright © 2011 American Association of Medical Dosimetrists. Published by Elsevier Inc. All rights reserved.
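
    The gamma analysis quoted above (3%/3 mm) combines a dose criterion and a distance criterion into a single pass/fail metric; a brute-force 1-D version for illustration:

```python
import numpy as np

def gamma_1d(dose_eval, dose_ref, x, dose_tol=0.03, dist_tol=3.0):
    """Global 1-D gamma index (e.g. 3%/3 mm).  For each reference point,
    search all evaluated points for the minimum combined dose/distance
    metric; gamma <= 1 counts as a pass.  Brute force, for illustration."""
    d_norm = dose_tol * dose_ref.max()          # global dose criterion
    gammas = np.empty_like(dose_ref)
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        dd = (dose_eval - di) / d_norm
        dx = (x - xi) / dist_tol
        gammas[i] = np.sqrt(dx**2 + dd**2).min()
    return gammas

x = np.linspace(0, 100, 201)                    # positions in mm
ref = np.exp(-(x - 50)**2 / (2 * 8**2))         # toy reference profile
ev = np.exp(-(x - 51)**2 / (2 * 8**2))          # 1 mm shifted evaluation
print((gamma_1d(ev, ref, x) <= 1).mean())       # pass rate
```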

  18. Simulation-Based Evaluation of Dose-Titration Algorithms for Rapid-Acting Insulin in Subjects with Type 2 Diabetes Mellitus Inadequately Controlled on Basal Insulin and Oral Antihyperglycemic Medications.

    PubMed

    Ma, Xiaosu; Chien, Jenny Y; Johnson, Jennal; Malone, James; Sinha, Vikram

    2017-08-01

    The purpose of this prospective, model-based simulation approach was to evaluate the impact of various rapid-acting mealtime insulin dose-titration algorithms on glycemic control (hemoglobin A1c [HbA1c]). Seven stepwise, glucose-driven insulin dose-titration algorithms were evaluated with a model-based simulation approach by using insulin lispro. Pre-meal blood glucose readings were used to adjust insulin lispro doses. Two control dosing algorithms were included for comparison: no insulin lispro (basal insulin+metformin only) or insulin lispro with fixed doses without titration. Of the seven dosing algorithms assessed, daily adjustment of insulin lispro dose, when glucose targets were met at pre-breakfast, pre-lunch, and pre-dinner, sequentially, demonstrated greater HbA1c reduction at 24 weeks, compared with the other dosing algorithms. Hypoglycemic rates were comparable among the dosing algorithms except for higher rates with the insulin lispro fixed-dose scenario (no titration), as expected. The inferior HbA1c response for the "basal plus metformin only" arm supports the additional glycemic benefit with prandial insulin lispro. Our model-based simulations support a simplified dosing algorithm that does not include carbohydrate counting, but that includes glucose targets for daily dose adjustment to maintain glycemic control with a low risk of hypoglycemia.
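
    A hypothetical one-step version of a stepwise, glucose-driven titration rule; the thresholds and 1-unit steps are invented to illustrate the idea and are not the dosing algorithms evaluated in the study.

```python
def titrate_lispro(dose, premeal_glucose_mgdl, target=(70, 130)):
    """One daily adjustment of a mealtime insulin lispro dose based on the
    pre-meal glucose reading.  All numbers are illustrative placeholders."""
    lo, hi = target
    if premeal_glucose_mgdl > hi:
        dose += 1.0          # running above target: increase the mealtime dose
    elif premeal_glucose_mgdl < lo:
        dose -= 1.0          # risk of hypoglycemia: back off
    return max(dose, 0.0)

print(titrate_lispro(6.0, 165))   # 7.0 units the next day
```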

  19. Warfarin Dosing Algorithms Underpredict Dose Requirements in Patients Requiring ≥7 mg Daily: A Systematic Review and Meta-analysis.

    PubMed

    Saffian, S M; Duffull, S B; Wright, Dfb

    2017-08-01

    There is preliminary evidence to suggest that some published warfarin dosing algorithms produce biased maintenance dose predictions in patients who require higher than average doses. We conducted a meta-analysis of warfarin dosing algorithms to determine if there exists a systematic under- or overprediction of dose requirements for patients requiring ≥7 mg/day across published algorithms. Medline and Embase databases were searched up to September 2015. We quantified the proportion of over- and underpredicted doses in patients whose observed maintenance dose was ≥7 mg/day. The meta-analysis included 47 evaluations of 22 different warfarin dosing algorithms from 16 studies, with data from 1,492 patients who required warfarin doses of ≥7 mg/day. All 22 algorithms were found to underpredict warfarin dosing requirements in patients who required ≥7 mg/day by an average of 2.3 mg/day, with a pooled estimate of underpredicted doses of 92.3% (95% confidence interval 90.3-94.1, I² = 24%). © 2017 American Society for Clinical Pharmacology and Therapeutics.

  20. A novel acenocoumarol pharmacogenomic dosing algorithm for the Greek population of EU-PACT trial.

    PubMed

    Ragia, Georgia; Kolovou, Vana; Kolovou, Genovefa; Konstantinides, Stavros; Maltezos, Efstratios; Tavridou, Anna; Tziakas, Dimitrios; Maitland-van der Zee, Anke H; Manolopoulos, Vangelis G

    2017-01-01

    To generate and validate a pharmacogenomic-guided (PG) dosing algorithm for acenocoumarol in the Greek population, and to compare its performance with other PG algorithms developed for the Greek population. A total of 140 Greek participants in the EU-PACT trial for acenocoumarol (a randomized clinical trial that prospectively compared the effect of a PG dosing algorithm with that of a clinical dosing algorithm on the percentage of time within the INR therapeutic range) who reached a stable acenocoumarol dose were included in the study. CYP2C9 and VKORC1 genotypes, age, and weight affected acenocoumarol dose and predicted 53.9% of its variability. The EU-PACT PG algorithm overestimated acenocoumarol dose across all CYP2C9/VKORC1 functional phenotype bins (predicted dose vs stable dose: in normal responders 2.31 vs 2.00 mg/day, p = 0.028; in sensitive responders 1.72 vs 1.50 mg/day, p = 0.003; in highly sensitive responders 1.39 vs 1.00 mg/day, p = 0.029). The PG algorithm previously developed for the Greek population overestimated the dose in normal responders (2.51 vs 2.00 mg/day, p < 0.001). An ethnic-specific dosing algorithm is suggested for better prediction of acenocoumarol dosage requirements in patients of Greek origin.

  1. Designing a range modulator wheel to spread-out the Bragg peak for a passive proton therapy facility

    NASA Astrophysics Data System (ADS)

    Jia, S. Bijan; Romano, F.; Cirrone, Giuseppe A. P.; Cuttone, G.; Hadizadeh, M. H.; Mowlavi, A. A.; Raffaele, L.

    2016-01-01

    In proton beam therapy, a spread-out Bragg peak (SOBP) is used to establish a uniform dose distribution in the target volume. In order to create a SOBP, several Bragg peaks of different ranges, corresponding to different entrance energies, must be combined with appropriate intensities (weights). In a passive beam scattering system, the beam is usually extracted from a cyclotron at a constant energy throughout a treatment. Therefore, a SOBP is produced by a range modulator wheel, which is basically a rotating wheel with steps of variable thickness, or by using ridge filters. In this study, we used the Geant4 toolkit to simulate a typical passive scattering beam line. In particular, the CATANA transport beam line of INFN Laboratori Nazionali del Sud (LNS) in Catania has been reproduced in this work. Some initial properties of the entrance beam were checked by benchmarking simulations against experimental data. A class dedicated to the simulation of wheel modulators has been implemented; it has been designed to be easily modified for simulating any desired modulator wheel and, hence, any suitable beam modulation. By using auxiliary range shifters, a set of pristine Bragg peaks was obtained from the simulations. A mathematical algorithm was developed, using the simulated pristine dose profiles as its input, to calculate the weight of each pristine peak, reproduce the SOBP, and finally generate a flat dose distribution. Finally, once the designed modulator had been manufactured, it was tested at the CATANA facility by comparing the experimental data with the simulation results.
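
    One plausible formulation of the weight-finding step described above is non-negative least squares over the simulated pristine peaks; the function names and array layout are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import nnls

def sobp_weights(pristine_peaks, depths, flat_from, flat_to, dose=1.0):
    """Solve for non-negative pristine-peak weights that best reproduce a
    flat SOBP by least squares.  `pristine_peaks` is an (n_depths, n_peaks)
    matrix of simulated depth-dose profiles; the fit is restricted to the
    desired plateau region [flat_from, flat_to]."""
    sel = (depths >= flat_from) & (depths <= flat_to)
    w, _ = nnls(pristine_peaks[sel], np.full(sel.sum(), dose))
    return w

# pristine_peaks @ w then gives the modulated depth-dose curve; the weights
# map to the angular widths of the steps on the modulator wheel.
```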

  2. Influence of different dose calculation algorithms on the estimate of NTCP for lung complications.

    PubMed

    Hedin, Emma; Bäck, Anna

    2013-09-06

    Due to limitations and uncertainties in dose calculation algorithms, different algorithms can predict different dose distributions and dose-volume histograms for the same treatment. This can be a problem when estimating the normal tissue complication probability (NTCP) for patient-specific dose distributions. Published NTCP model parameters are often derived for a different dose calculation algorithm than the one used to calculate the actual dose distribution. The use of algorithm-specific NTCP model parameters can prevent errors caused by differences in dose calculation algorithms. The objective of this work was to determine how to change the NTCP model parameters for lung complications derived for a simple correction-based pencil beam dose calculation algorithm, in order to make them valid for three other common dose calculation algorithms. NTCP was calculated with the relative seriality (RS) and Lyman-Kutcher-Burman (LKB) models. The four dose calculation algorithms used were the pencil beam (PB) and collapsed cone (CC) algorithms employed by Oncentra, and the pencil beam convolution (PBC) and anisotropic analytical algorithm (AAA) employed by Eclipse. Original model parameters for lung complications were taken from four published studies on different grades of pneumonitis, and new algorithm-specific NTCP model parameters were determined. The difference between original and new model parameters was presented in relation to the reported model parameter uncertainties. Three different types of treatments were considered in the study: tangential and locoregional breast cancer treatment and lung cancer treatment. Changing the algorithm without the derivation of new model parameters caused changes in the NTCP value of up to 10 percentage points for the cases studied. Furthermore, the error introduced could be of the same magnitude as the confidence intervals of the calculated NTCP values. The new NTCP model parameters were tabulated as the algorithm was varied from PB to PBC, AAA, or CC. Moving from the PB to the PBC algorithm did not require new model parameters; however, moving from PB to AAA or CC did require a change in the NTCP model parameters, with CC requiring the largest change. It was shown that the new model parameters for a given algorithm are different for the different treatment types.
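
    For reference, the LKB model used above reduces a DVH to a generalized equivalent uniform dose (gEUD) and maps it through a normal CDF. The commented lung parameters are one published set (rounded), and are exactly the kind of algorithm-specific values the study re-derives.

```python
import numpy as np
from scipy.stats import norm

def lkb_ntcp(doses, volumes, TD50, m, n):
    """Lyman-Kutcher-Burman NTCP from a differential DVH.
    doses   : bin doses (Gy); volumes : organ volume per bin.
    gEUD = (sum_i v_i * D_i**(1/n))**n reduces the DVH to a single dose;
    NTCP = Phi((gEUD - TD50) / (m * TD50))."""
    v = volumes / volumes.sum()
    geud = (v * doses**(1.0 / n)).sum()**n
    return norm.cdf((geud - TD50) / (m * TD50))

# Illustrative lung pneumonitis parameters (one published set, rounded):
# lkb_ntcp(doses, volumes, TD50=30.8, m=0.37, n=0.99)
```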

  3. Comparison of the Performance of the Warfarin Pharmacogenetics Algorithms in Patients with Surgery of Heart Valve Replacement and Heart Valvuloplasty.

    PubMed

    Xu, Hang; Su, Shi; Tang, Wuji; Wei, Meng; Wang, Tao; Wang, Dongjin; Ge, Weihong

    2015-09-01

    A large number of warfarin pharmacogenetics algorithms have been published. Our research aimed to evaluate the performance of selected pharmacogenetic algorithms in patients undergoing heart valve replacement or heart valvuloplasty during the initial and stable phases of anticoagulation treatment. Ten pharmacogenetic algorithms were selected by searching PubMed, and their performance was compared in a cohort of 193 patients during the initial and stable phases of anticoagulation therapy. Predicted dose was compared to therapeutic dose using the percentage of predicted doses falling within 20% of the actual dose (percentage within 20%) and the mean absolute error (MAE). The average warfarin dose was 3.05 ± 1.23 mg/day for initial treatment and 3.45 ± 1.18 mg/day for stable treatment. The percentages of predicted doses within 20% of the therapeutic dose were 44.0 ± 8.8% and 44.6 ± 9.7% for the initial and stable phases, respectively. The MAEs of the selected algorithms were 0.85 ± 0.18 mg/day and 0.93 ± 0.19 mg/day, respectively. All algorithms had better performance in the ideal-dose group than in the low-dose and high-dose groups; the only exception was the Wadelius et al. algorithm, which performed better in the high-dose group. The algorithms had similar performance except for the Wadelius et al. and Miao et al. algorithms, which had poor accuracy in our study cohort. The Gage et al. algorithm had better performance in both the initial and stable phases. The algorithms had relatively higher accuracy in the >50 years patient group during the stable phase. Copyright © 2015 Elsevier Ltd. All rights reserved.
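
    The two accuracy measures used in this and the related warfarin studies are easy to state precisely:

```python
import numpy as np

def dose_prediction_metrics(predicted, actual):
    """Return (percentage of predictions within 20% of the actual maintenance
    dose, mean absolute error) for paired dose arrays in mg/day."""
    predicted, actual = np.asarray(predicted), np.asarray(actual)
    within20 = np.abs(predicted - actual) <= 0.2 * actual
    return 100.0 * within20.mean(), np.abs(predicted - actual).mean()

pct, mae = dose_prediction_metrics([2.8, 4.1, 3.0], [3.0, 3.5, 4.2])
print(f"within 20%: {pct:.0f}%, MAE: {mae:.2f} mg/day")
```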

  4. HF band filter bank multi-carrier spread spectrum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laraway, Stephen Andrew; Moradi, Hussein; Farhang-Boroujeny, Behrouz

    This paper describes modifications to the filter bank multicarrier spread spectrum (FB-MC-SS) system, presented in [1] and [2], to enable transmission of this waveform in the HF skywave channel. FB-MC-SS is well suited for the HF channel because it performs well in channels with frequency selective fading and interference. This paper describes new algorithms for packet detection, timing recovery, and equalization that are suitable for the HF channel. Also, an algorithm for optimizing the peak to average power ratio (PAPR) of the FB-MC-SS waveform is presented; application of this algorithm results in a waveform with low PAPR. Simulation results using a wide band HF channel model demonstrate the robustness of this system over a wide range of delay and Doppler spreads.
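
    For reference, the PAPR metric the paper optimizes is simply the peak-to-mean power ratio of the baseband waveform; the toy multicarrier symbol below is illustrative, not the FB-MC-SS waveform itself.

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband waveform, in dB."""
    p = np.abs(x)**2
    return 10 * np.log10(p.max() / p.mean())

# Toy multicarrier symbol: 64 subcarriers carrying random QPSK data
rng = np.random.default_rng(0)
sym = (rng.choice([1, -1], 64) + 1j * rng.choice([1, -1], 64)) / np.sqrt(2)
x = np.fft.ifft(sym)
print(f"{papr_db(x):.1f} dB")
```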

  5. A new algorithm for epilepsy seizure onset detection and spread estimation from EEG signals

    NASA Astrophysics Data System (ADS)

    Quintero-Rincón, Antonio; Pereyra, Marcelo; D'Giano, Carlos; Batatia, Hadj; Risk, Marcelo

    2016-04-01

    Appropriate diagnosis and treatment of epilepsy is a major public health issue. Patients suffering from this disease often exhibit different physical manifestations, which result from the synchronous and excessive discharge of a group of neurons in the cerebral cortex. Extracting this information from EEG signals is an important problem in biomedical signal processing. In this work we propose a new algorithm for seizure onset detection and spread estimation in epilepsy patients. The algorithm is based on a multilevel 1-D wavelet decomposition that captures the physiological brain frequency bands, coupled with a generalized Gaussian model. Preliminary experiments with signals from 30 epileptic seizures in 11 subjects suggest that the proposed methodology is a powerful tool for detecting the onset of epileptic seizures and estimating their spread across the brain.
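
    A sketch of the general idea (multilevel wavelet decomposition with a generalized Gaussian fit per band), not the authors' exact pipeline; the wavelet choice and decomposition level are assumptions.

```python
import pywt
from scipy.stats import gennorm

def band_features(eeg, wavelet='db4', level=5):
    """Decompose one EEG channel into approximation/detail bands and fit a
    generalized Gaussian (shape beta, scale) to each band's coefficients.
    Band-wise changes in these parameters can flag seizure onset."""
    coeffs = pywt.wavedec(eeg, wavelet, level=level)
    feats = []
    for c in coeffs:
        beta, loc, scale = gennorm.fit(c, floc=0.0)  # fix the location at 0
        feats.append((beta, scale))
    return feats
```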

  6. Automated biodosimetry using digital image analysis of fluorescence in situ hybridization specimens.

    PubMed

    Castleman, K R; Schulze, M; Wu, Q

    1997-11-01

    Fluorescence in situ hybridization (FISH) of metaphase chromosome spreads is valuable for monitoring the radiation dose to circulating lymphocytes. At low dose levels, the number of cells that must be examined to estimate aberration frequencies is quite large. An automated microscope that can perform this analysis autonomously on suitably prepared specimens promises to make practical the large-scale studies that will be required for biodosimetry in the future. This paper describes such an instrument that is currently under development. We use metaphase specimens in which the five largest chromosomes have been hybridized with different-colored whole-chromosome painting probes. An automated multiband fluorescence microscope locates the spreads and counts the number of chromosome components of each color. Digital image analysis is used to locate and isolate the cells, count chromosome components, and estimate the proportions of abnormal cells. Cells exhibiting more than two chromosomal fragments in any color correspond to a clastogenic event. These automatically derived counts are corrected for statistical bias and used to estimate the overall rate of chromosome breakage. Overlap of fluorophore emission spectra prohibits isolation of the different chromosomes into separate color channels. Image processing effectively isolates each fluorophore to a single monochrome image, simplifying the task of counting chromosome fragments and reducing the error in the algorithm. Using proportion estimation, we remove the bias introduced by counting errors, leaving accuracy restricted by sample size considerations alone.

  7. The impact of different algorithms for ideal body weight on screening for hydroxychloroquine retinopathy in women.

    PubMed

    Browning, David J; Lee, Chong; Rotberg, David

    2014-01-01

    To determine how algorithms for ideal body weight (IBW) affect hydroxychloroquine dosing in women. This was a retrospective study of 520 patients screened for hydroxychloroquine retinopathy. Charts were reviewed for sex, height, weight, and daily dose. The outcome measures were the ranges of IBW across algorithms; rates of potentially toxic dosing; height thresholds below which 400 mg/d dosing is potentially toxic; and rates for which actual body weight (ABW) was less than IBW. Women made up 474 (91%) of the patients. The IBW for a given height varied by 30-34 pounds (13.6-15.5 kg) across algorithms. The threshold heights below which toxic dosing occurred varied from 62-70 inches (157.5-177.8 cm). Different algorithms placed 16%-98% of women in the toxic dosing range. The proportion for whom dosing should have been based on ABW rather than IBW ranged from 5%-31% across algorithms. Although hydroxychloroquine dosing should be based on the lesser of ABW and IBW, there is no consensus about the definition of IBW. The Michaelides algorithm is associated with the most frequent need to adjust dosing; the Metropolitan Life Insurance large-frame mean-value table with the least frequent need. No evidence indicates that one algorithm is superior to the others.
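
    As one concrete example of the ambiguity: the Devine formula is only one of the competing IBW algorithms the study compares (others give different values, which is the paper's point), and the classic 6.5 mg/kg/day screening rule applied to the lesser of ABW and IBW gives:

```python
def devine_ibw_kg(height_cm, female=True):
    """Devine formula for ideal body weight; one of several IBW algorithms."""
    inches_over_5ft = max(height_cm / 2.54 - 60.0, 0.0)
    return (45.5 if female else 50.0) + 2.3 * inches_over_5ft

def max_hcq_dose_mg(height_cm, weight_kg, female=True, mg_per_kg=6.5):
    """Classic screening rule: dose hydroxychloroquine on the lesser of
    actual and ideal body weight, at 6.5 mg/kg/day."""
    dosing_weight = min(weight_kg, devine_ibw_kg(height_cm, female))
    return mg_per_kg * dosing_weight

# A 160 cm, 80 kg woman: Devine IBW ~= 52.4 kg, so the standard 400 mg/day
# (~7.6 mg/kg IBW) would exceed the 6.5 mg/kg threshold of ~340 mg/day.
print(max_hcq_dose_mg(160, 80))
```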

  8. SU-F-T-352: Development of a Knowledge Based Automatic Lung IMRT Planning Algorithm with Non-Coplanar Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, W; Wu, Q; Yuan, L

    Purpose: To improve the robustness of a knowledge-based automatic lung IMRT planning method and to further validate the reliability of this algorithm by applying it to the planning of clinical cases with non-coplanar beams. Methods: A lung IMRT planning method which automatically determines both plan optimization objectives and beam configurations with non-coplanar beams has been reported previously. A beam efficiency index map is constructed to guide beam angle selection in this algorithm. This index takes into account both the dose contributions from individual beams and the combined effect of multiple beams, which is represented by a beam separation score. We studied the effect of this beam separation score on plan quality and determined the optimal weight for this score. Fourteen clinical plans were re-planned with the knowledge-based algorithm. Significant dosimetric metrics for the PTV and OARs in the automatic plans were compared with those in the clinical plans by the two-sample t-test. In addition, a composite dosimetric quality index was defined to obtain the relationship between plan quality and the beam separation score. Results: On average, we observed more than 15% reduction in conformity index and homogeneity index for the PTV and in V40 and V60 for the heart, with an 8% and 3% increase in V5 and V20 for the lungs, respectively. The variation curve of the composite index as a function of the beam separation score shows that 0.6 is the best value for the weight of this score. Conclusion: The optimal value for the beam separation score weight in automatic lung IMRT planning was obtained. With this value, the model can produce statistically the "best" achievable plans. This method can potentially improve the quality and planning efficiency of IMRT plans with non-coplanar angles.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pacaci, P; Cebe, M; Mabhouti, H

    Purpose: In this study, a dosimetric comparison of the field-in-field (FIF) and intensity modulated radiation therapy (IMRT) techniques used for whole breast radiotherapy (WBRT) was made. The dosimetric accuracy of the treatment planning system (TPS) with the Anisotropic Analytical Algorithm (AAA) and Acuros XB (AXB) algorithms in predicting PTV and OAR doses was also investigated. Methods: Two different treatment plans for left-sided breast cancer were generated for a Rando phantom. FIF and IMRT plans were compared for doses in PTV and OAR volumes including the ipsilateral lung, heart, left ascending coronary artery, contralateral lung, and contralateral breast. PTV and OAR doses and homogeneity and conformality indexes were compared between the two techniques. The accuracy of the TPS dose calculation algorithms was tested by comparing PTV and OAR doses measured by thermoluminescent dosimetry with the doses calculated by the TPS using AAA and AXB for both techniques. Results: IMRT plans had better conformality and homogeneity indexes than the FIF technique and spared OARs better than FIF. While both algorithms overestimated PTV doses, they underestimated all OAR doses. For the IMRT plan PTV doses, overestimation of up to 2.5% was seen with the AAA algorithm, decreasing to 1.8% when the AXB algorithm was used. Based on the results of the anthropomorphic measurements for OAR doses, underestimation greater than 7% is possible with the AAA. The results from the AXB are much better than those from the AAA algorithm; however, underestimations of 4.8% were found at some points even with AXB. For the FIF plan, a similar trend was seen for PTV and OAR doses with both algorithms. Conclusion: When using the Eclipse TPS for breast cancer, AXB should be used instead of the AAA algorithm, bearing in mind that AXB may still underestimate all OAR doses.

  10. Evaluation of 16 genotype-guided Warfarin Dosing Algorithms in 310 Korean Patients Receiving Warfarin Treatment: Poor Prediction Performance in VKORC1 1173C Carriers.

    PubMed

    Yang, Mina; Choi, Rihwa; Kim, June Soo; On, Young Keun; Bang, Oh Young; Cho, Hyun-Jung; Lee, Soo-Youn

    2016-12-01

    The purpose of this study was to evaluate the performance of 16 previously published warfarin dosing algorithms in Korean patients. The 16 algorithms were selected through a literature search and evaluated using a cohort of 310 Korean patients with atrial fibrillation or cerebral infarction who were receiving warfarin therapy. A large interindividual variation (up to 11-fold) in warfarin dose was observed (median, 25 mg/wk; range, 7-77 mg/wk). Estimated dose and actual maintenance dose correlated well overall (r range, 0.52-0.73). Mean absolute error (MAE) of the 16 algorithms ranged from -1.2 to -20.1 mg/wk. The percentage of patients whose estimated dose fell within 20% of the actual dose ranged from 1.0% to 49%. All algorithms showed poor accuracy with increased MAE in a higher dose range. Performance of the dosing algorithms was worse in patients with VKORC1 1173TC or CC than in total (r range, 0.38-0.61 vs 0.52-0.73; MAE range, -2.6 to -28.0 mg/wk vs -1.2 to -20.1 mg/wk). The algorithms had comparable prediction abilities but showed limited accuracy depending on ethnicity, warfarin dose, and VKORC1 genotype. Further studies are needed to develop genotype-guided warfarin dosing algorithms with greater accuracy in the Korean population. Copyright © 2016 Elsevier HS Journals, Inc. All rights reserved.

  11. SU-E-T-91: Accuracy of Dose Calculation Algorithms for Patients Undergoing Stereotactic Ablative Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tajaldeen, A; Ramachandran, P; Geso, M

    2015-06-15

    Purpose: The purpose of this study was to investigate and quantify the variation in dose distributions in small-field lung cancer radiotherapy using seven different dose calculation algorithms. Methods: The study was performed in 21 lung cancer patients who underwent Stereotactic Ablative Body Radiotherapy (SABR). Two different methods, (i) the same dose coverage to the target volume (the same-dose method) and (ii) the same monitor units in all algorithms (the same-monitor-units method), were used to study the performance of seven different dose calculation algorithms in the XiO and Eclipse treatment planning systems. The seven dose calculation algorithms included Superposition, Fast Superposition, Fast Fourier Transform (FFT) Convolution, Clarkson, Anisotropic Analytical Algorithm (AAA), Acuros XB, and Pencil Beam (PB). Prior to this, a phantom study was performed to assess the accuracy of these algorithms. The Superposition algorithm was used as the reference algorithm in this study. The treatment plans were compared using different dosimetric parameters including conformity, heterogeneity, and dose fall-off index. In addition, the doses to critical structures such as the lungs, heart, oesophagus, and spinal cord were also studied. Statistical analysis was performed using Prism software. Results: The mean ± SD conformity index for the Superposition, Fast Superposition, Clarkson, and FFT Convolution algorithms was 1.29 ± 0.13, 1.31 ± 0.16, 2.2 ± 0.7, and 2.17 ± 0.59, respectively, whereas for AAA, Pencil Beam, and Acuros XB it was 1.4 ± 0.27, 1.66 ± 0.27, and 1.35 ± 0.24, respectively. Conclusion: Our study showed significant variations among the seven different algorithms. The Superposition and Acuros XB algorithms showed similar values for most of the dosimetric parameters, while the Clarkson, FFT Convolution, and Pencil Beam algorithms showed large differences compared to the Superposition algorithm. Based on our study, we recommend the Superposition and Acuros XB algorithms as the first choice of algorithms in lung cancer radiotherapy involving small fields. However, further investigation by Monte Carlo simulation is required to confirm our results.

  12. Accuracy of patient-specific organ dose estimates obtained using an automated image segmentation algorithm.

    PubMed

    Schmidt, Taly Gilat; Wang, Adam S; Coradi, Thomas; Haas, Benjamin; Star-Lack, Josh

    2016-10-01

    The overall goal of this work is to develop a rapid, accurate, and automated software tool to estimate patient-specific organ doses from computed tomography (CT) scans using simulations to generate dose maps combined with automated segmentation algorithms. This work quantified the accuracy of organ dose estimates obtained by an automated segmentation algorithm. We hypothesized that the autosegmentation algorithm is sufficiently accurate to provide organ dose estimates, since small errors delineating organ boundaries will have minimal effect when computing mean organ dose. A leave-one-out validation study of the automated algorithm was performed with 20 head-neck CT scans expertly segmented into nine regions. Mean organ doses of the automatically and expertly segmented regions were computed from Monte Carlo-generated dose maps and compared. The automated segmentation algorithm estimated the mean organ dose to be within 10% of the expert segmentation for regions other than the spinal canal, with the median error for each organ region below 2%. In the spinal canal region, the median error was −7%, with a maximum absolute error of 28% for the single-atlas approach and 11% for the multiatlas approach. The results demonstrate that the automated segmentation algorithm can provide accurate organ dose estimates despite some segmentation errors.

  14. Evaluation of an iterative model-based CT reconstruction algorithm by intra-patient comparison of standard and ultra-low-dose examinations.

    PubMed

    Noël, Peter B; Engels, Stephan; Köhler, Thomas; Muenzel, Daniela; Franz, Daniela; Rasper, Michael; Rummeny, Ernst J; Dobritz, Martin; Fingerle, Alexander A

    2018-01-01

    Background: The explosive growth of computed tomography (CT) has led to growing public health concern about patient and population radiation dose. A recently introduced technique for dose reduction, which can be combined with tube-current modulation, over-beam reduction, and organ-specific dose reduction, is iterative reconstruction (IR). Purpose: To evaluate the quality, at different radiation dose levels, of three reconstruction algorithms for the diagnosis of patients with proven liver metastases under tumor follow-up. Material and Methods: A total of 40 thorax-abdomen-pelvis CT examinations acquired from 20 patients in a tumor follow-up were included. All patients were imaged using a standard-dose and a specific low-dose CT protocol. Reconstructed slices were generated using three different reconstruction algorithms: a classical filtered back projection (FBP); a first-generation iterative noise-reduction algorithm (iDose4); and a next-generation model-based IR algorithm (IMR). Results: The overall detection of liver lesions tended to be higher with the IMR algorithm than with FBP or iDose4. The IMR dataset at standard dose yielded the highest overall detectability, while the low-dose FBP dataset showed the lowest detectability. For the low-dose protocols, significantly improved detectability of liver lesions can be reported compared to FBP or iDose4 (P = 0.01). The radiation dose decreased by an approximate factor of 5 between the standard-dose and the low-dose protocol. Conclusion: The latest generation of IR algorithms significantly improved diagnostic image quality and provided virtually noise-free images for ultra-low-dose CT imaging.

  15. A clinical study of lung cancer dose calculation accuracy with Monte Carlo simulation.

    PubMed

    Zhao, Yanqun; Qi, Guohai; Yin, Gang; Wang, Xianliang; Wang, Pei; Li, Jian; Xiao, Mingyong; Li, Jie; Kang, Shengwei; Liao, Xiongfei

    2014-12-16

    The accuracy of dose calculation is crucial to the quality of treatment planning and, consequently, to the dose delivered to patients undergoing radiation therapy. Current general calculation algorithms such as Pencil Beam Convolution (PBC) and Collapsed Cone Convolution (CCC) have shortcomings in regard to severe inhomogeneities, particularly in those regions where charged particle equilibrium does not hold. The aim of this study was to evaluate the accuracy of the PBC and CCC algorithms in lung cancer radiotherapy using Monte Carlo (MC) technology. Four treatment plans were designed using the Oncentra Masterplan TPS for each patient: two intensity-modulated radiation therapy (IMRT) plans developed using the PBC and CCC algorithms, and two three-dimensional conformal therapy (3DCRT) plans developed using the PBC and CCC algorithms. The DICOM-RT files of the treatment plans were exported to the Monte Carlo system for recalculation. The dose distributions of the GTV, PTV, and ipsilateral lung calculated by the TPS and MC were compared. For 3DCRT and IMRT plans, the mean dose difference for the GTV between CCC and MC increased with decreasing GTV volume, and the mean dose differences for IMRT were higher than those for 3DCRT. The CCC algorithm overestimated the GTV mean dose by approximately 3% for IMRT. For 3DCRT plans, when the volume of the GTV was greater than 100 cm³, the mean doses calculated by CCC and MC differed very little, whereas PBC showed large deviations from the MC algorithm. For the dose to the ipsilateral lung, the CCC algorithm overestimated the dose to the entire lung, and the PBC algorithm overestimated V20 but underestimated V5; the difference in V10 was not statistically significant. PBC substantially overestimates the dose to the tumour, whereas CCC is close to the MC simulation. It is recommended that treatment plans for lung cancer be developed using an advanced dose calculation algorithm other than PBC. MC can accurately calculate the dose distribution in lung cancer and provides a notably effective tool for benchmarking the performance of other dose calculation algorithms within patients.

  16. Towards predictive data-driven simulations of wildfire spread - Part I: Reduced-cost Ensemble Kalman Filter based on a Polynomial Chaos surrogate model for parameter estimation

    NASA Astrophysics Data System (ADS)

    Rochoux, M. C.; Ricci, S.; Lucor, D.; Cuenot, B.; Trouvé, A.

    2014-05-01

    This paper is the first part in a series of two articles and presents a data-driven wildfire simulator for forecasting wildfire spread scenarios, at a reduced computational cost that is consistent with operational systems. The prototype simulator features the following components: a level-set-based fire propagation solver FIREFLY that adopts a regional-scale modeling viewpoint, treats wildfires as surface propagating fronts, and uses a description of the local rate of fire spread (ROS) as a function of environmental conditions based on Rothermel's model; a series of airborne-like observations of the fire front positions; and a data assimilation algorithm based on an ensemble Kalman filter (EnKF) for parameter estimation. This stochastic algorithm partly accounts for the non-linearities between the input parameters of the semi-empirical ROS model and the fire front position, and is sequentially applied to provide a spatially-uniform correction to wind and biomass fuel parameters as observations become available. A wildfire spread simulator combined with an ensemble-based data assimilation algorithm is therefore a promising approach to reduce uncertainties in the forecast position of the fire front and to introduce a paradigm-shift in the wildfire emergency response. In order to reduce the computational cost of the EnKF algorithm, a surrogate model based on a polynomial chaos (PC) expansion is used in place of the forward model FIREFLY in the resulting hybrid PC-EnKF algorithm. The performance of EnKF and PC-EnKF is assessed on synthetically-generated simple configurations of fire spread to provide valuable information and insight on the benefits of the PC-EnKF approach as well as on a controlled grassland fire experiment. The results indicate that the proposed PC-EnKF algorithm features similar performance to the standard EnKF algorithm, but at a much reduced computational cost. In particular, the re-analysis and forecast skills of data assimilation strongly relate to the spatial and temporal variability of the errors in the ROS model parameters.

  17. Towards predictive data-driven simulations of wildfire spread - Part I: Reduced-cost Ensemble Kalman Filter based on a Polynomial Chaos surrogate model for parameter estimation

    NASA Astrophysics Data System (ADS)

    Rochoux, M. C.; Ricci, S.; Lucor, D.; Cuenot, B.; Trouvé, A.

    2014-11-01

    This paper is the first part in a series of two articles and presents a data-driven wildfire simulator for forecasting wildfire spread scenarios, at a reduced computational cost that is consistent with operational systems. The prototype simulator features the following components: an Eulerian front propagation solver FIREFLY that adopts a regional-scale modeling viewpoint, treats wildfires as surface propagating fronts, and uses a description of the local rate of fire spread (ROS) as a function of environmental conditions based on Rothermel's model; a series of airborne-like observations of the fire front positions; and a data assimilation (DA) algorithm based on an ensemble Kalman filter (EnKF) for parameter estimation. This stochastic algorithm partly accounts for the nonlinearities between the input parameters of the semi-empirical ROS model and the fire front position, and is sequentially applied to provide a spatially uniform correction to wind and biomass fuel parameters as observations become available. A wildfire spread simulator combined with an ensemble-based DA algorithm is therefore a promising approach to reduce uncertainties in the forecast position of the fire front and to introduce a paradigm-shift in the wildfire emergency response. In order to reduce the computational cost of the EnKF algorithm, a surrogate model based on a polynomial chaos (PC) expansion is used in place of the forward model FIREFLY in the resulting hybrid PC-EnKF algorithm. The performance of EnKF and PC-EnKF is assessed on synthetically generated simple configurations of fire spread to provide valuable information and insight on the benefits of the PC-EnKF approach, as well as on a controlled grassland fire experiment. The results indicate that the proposed PC-EnKF algorithm features similar performance to the standard EnKF algorithm, but at a much reduced computational cost. In particular, the re-analysis and forecast skills of DA strongly relate to the spatial and temporal variability of the errors in the ROS model parameters.
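
    The EnKF parameter-estimation step described in both records has a standard perturbed-observation form; a generic sketch, not the authors' FIREFLY coupling.

```python
import numpy as np

def enkf_parameter_update(params, predicted_obs, y, obs_cov, rng=None):
    """One EnKF analysis step for parameter estimation (state augmentation).
    params        : (n_ens, n_par) ensemble of ROS-model parameters
    predicted_obs : (n_ens, n_obs) fire-front positions from the forward
                    model (or its polynomial-chaos surrogate)
    y             : (n_obs,) observed front positions
    obs_cov       : (n_obs, n_obs) observation error covariance"""
    rng = rng or np.random.default_rng()
    P = params - params.mean(axis=0)                 # parameter anomalies
    H = predicted_obs - predicted_obs.mean(axis=0)   # observation anomalies
    n = params.shape[0]
    cov_ph = P.T @ H / (n - 1)                       # parameter-obs covariance
    cov_hh = H.T @ H / (n - 1) + obs_cov             # innovation covariance
    K = cov_ph @ np.linalg.inv(cov_hh)               # Kalman gain
    y_pert = y + rng.multivariate_normal(np.zeros(len(y)), obs_cov, size=n)
    return params + (y_pert - predicted_obs) @ K.T
```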

  18. Management of patients infected with airborne-spread diseases: an algorithm for infection control professionals.

    PubMed

    Rebmann, Terri

    2005-12-01

    Many US hospitals lack the capacity to safely house a surge of potentially infectious patients, increasing the risk of secondary transmission. Respiratory protection and negative-pressure rooms are needed to prevent transmission of airborne-spread diseases, but US hospitals lack available and/or properly functioning negative-pressure rooms. Creating new rooms or retrofitting existing facilities is time-consuming and expensive. Safe methods of managing patients with airborne-spread diseases and establishing temporary negative-pressure and/or protective environments were determined by a literature review. Relevant data were analyzed and synthesized to generate a response algorithm. Ideal patient management and placement guidelines, including instructions for choosing respiratory protection and creating temporary negative-pressure or other protective environments, were delineated, and the findings were summarized in a treatment algorithm. The threat of bioterrorism and emerging infections increases health care's need for negative-pressure and/or protective environments. The algorithm outlines appropriate response steps to decrease transmission risk until an ideal protective environment can be utilized. Using this algorithm will prepare infection control professionals to respond more effectively during a surge of potentially infectious patients following a bioterrorism attack or emerging infectious disease outbreak.

  19. SU-F-P-56: On a New Approach to Reconstruct the Patient Dose From Phantom Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bangtsson, E; Vries, W de

    Purpose: The development of complex radiation treatment schemes emphasizes the need for advanced QA analysis methods to ensure patient safety. One such tool is the Delta4 DVH Anatomy software, in which the patient dose is reconstructed from phantom measurements; deviations in the measured dose are transferred to the patient anatomy and their clinical impact is evaluated in situ. Results from the original algorithm revealed weaknesses that may introduce artefacts in the reconstructed dose, which can lead to false negatives or obscure the effects of minor dose deviations from delivery failures. Here, we present results from a new patient dose reconstruction algorithm. Methods: The main steps of the new algorithm are: (1) the dose delivered to a phantom is measured at a number of detector positions; (2) the measured dose is compared to an internally calculated dose distribution evaluated at those positions; (3) the resulting dose difference is used to calculate an energy fluence difference; and (4) this entity is used as input to a patient dose correction calculation routine. Finally, the patient dose is reconstructed by adding said patient dose correction to the planned patient dose. The internal dose calculations in steps (2) and (4) are based on the Pencil Beam algorithm. Results: The new patient dose reconstruction algorithm has been tested on a number of patients, and the standard metrics of dose deviation (DDev), distance-to-agreement (DTA), and Gamma index are improved compared with the original algorithm. In one case the Gamma index (3%/3mm) pass rate increased from 72.9% to 96.6%. Conclusion: The patient dose reconstruction algorithm is improved, leading to a reduction in non-physical artefacts in the reconstructed patient dose. As a consequence, the ability to detect deviations in the dose delivered to the patient is improved, and an increase in the Gamma index for the PTV can be seen. The corresponding author is an employee of ScandiDos.

  20. Influence of dose calculation algorithms on the predicted dose distribution and NTCP values for NSCLC patients.

    PubMed

    Nielsen, Tine B; Wieslander, Elinore; Fogliata, Antonella; Nielsen, Morten; Hansen, Olfred; Brink, Carsten

    2011-05-01

    To investigate differences in calculated doses and normal tissue complication probability (NTCP) values between different dose algorithms. Six dose algorithms from four different treatment planning systems were investigated: Eclipse AAA, Oncentra MasterPlan Collapsed Cone and Pencil Beam, Pinnacle Collapsed Cone, and XiO Multigrid Superposition and Fast Fourier Transform Convolution. Twenty NSCLC patients treated in the period 2001-2006 on the same accelerator were included, and that accelerator was modeled in the different systems. The treatment plans were recalculated with the same number of monitor units and beam arrangements across the dose algorithms. Dose volume histograms of the GTV, PTV, combined lungs (excluding the GTV), and heart were exported and evaluated. NTCP values for heart and lungs were calculated using the relative seriality model and the LKB model, respectively; NTCP for the lungs was furthermore calculated from two different model parameter sets. Calculations and evaluations were performed both including and excluding density corrections. Statistically significant differences were found between the calculated doses to heart, lung, and targets across the algorithms. Mean lung dose and V20 are not very sensitive to a change between the investigated dose calculation algorithms; however, the PTV dose levels averaged over the patient population vary by up to 11%. The predicted NTCP values for pneumonitis vary between 0.20 and 0.24 or between 0.35 and 0.48 across the investigated dose algorithms, depending on the chosen model parameter set. The influence of density correction in the dose calculation on the predicted NTCP values depends on the specific dose calculation algorithm and the model parameter set; for fixed values of these, the changes in NTCP can be up to 45%. Calculated NTCP values for pneumonitis are more sensitive to the choice of algorithm than mean lung dose and V20, which are also commonly used for plan evaluation. The NTCP values for heart complication are, in this study, not very sensitive to the choice of algorithm. Dose calculations based on density corrections result in quite different NTCP values than calculations without density corrections. It is therefore important when working with NTCP planning to use NTCP parameter values based on calculations and treatments similar to those for which the NTCP is of interest.
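
    For reference, the relative seriality (Källman) model used above for the heart combines Poisson voxel responses through a seriality parameter s:

```python
import numpy as np

def relative_seriality_ntcp(doses, volumes, D50, gamma, s):
    """Relative seriality NTCP from a differential DVH.
    Voxel response uses the Poisson model
        P(D) = 2 ** (-exp(e * gamma * (1 - D / D50)))
    and the organ response combines voxels with seriality parameter s:
        NTCP = (1 - prod_i (1 - P(D_i)**s)**v_i) ** (1/s)."""
    v = volumes / volumes.sum()
    P = 2.0 ** (-np.exp(np.e * gamma * (1.0 - doses / D50)))
    return (1.0 - np.prod((1.0 - P**s) ** v)) ** (1.0 / s)
```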

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vikraman, S; Ramu, M; Karrthick, Kp

    Purpose: The purpose of this study was to validate COMPASS 3D dosimetry as a routine pretreatment verification tool against the commercially available CMS Monaco and Oncentra MasterPlan planning systems. Methods: Twenty esophagus patients were selected for this study. All patients underwent radical VMAT treatment on an Elekta linac, and plans were generated in Monaco v5.0 with the Monte Carlo (MC) dose calculation algorithm. COMPASS 3D dosimetry incorporates an advanced collapsed cone convolution (CCC) dose calculation algorithm. To validate the CCC algorithm in COMPASS, the DICOM RT plans generated using the Monaco MC algorithm were transferred to the Oncentra MasterPlan v4.3 TPS, where only final dose calculations were performed with the CCC algorithm, without optimization. Since MC is an accurate algorithm, differences between the MC and CCC algorithms are expected; hence the CCC implementation in COMPASS should be validated against another commercially available CCC algorithm. To use CCC as a pretreatment verification tool with reference to MC-generated treatment plans, CCC in OMP and CCC in COMPASS were compared using dose-volume indices such as D98 and D95 for target volumes and OAR doses. Results: The point doses for open beams agreed to within 1% of the Monaco MC algorithm. Comparison of CCC (OMP) and CCC (COMPASS) showed mean differences of 1.82%±1.12 SD and 1.65%±0.67 SD for D98 and D95, respectively, for target coverage. A maximum point dose difference of −2.15%±0.60 SD was observed in the target volume. The mean lung dose differed by −2.68%±1.67 SD between OMP and COMPASS. The maximum point dose difference for the spinal cord was −1.82%±0.287 SD. Conclusion: In this study, the accuracy of the CCC algorithm in COMPASS 3D dosimetry was validated by comparison with the CCC algorithm in the OMP TPS. Dose calculation in COMPASS is feasible to within 2% of commercially available TPS algorithms.

  2. Dosimetric accuracy of a treatment planning system for actively scanned proton beams and small target volumes: Monte Carlo and experimental validation

    NASA Astrophysics Data System (ADS)

    Magro, G.; Molinelli, S.; Mairani, A.; Mirandola, A.; Panizza, D.; Russo, S.; Ferrari, A.; Valvo, F.; Fossati, P.; Ciocca, M.

    2015-09-01

    This study was performed to evaluate the accuracy of a commercial treatment planning system (TPS) in optimising proton pencil beam dose distributions for small targets of different sizes (5-30 mm side) located at increasing depths in water. The TPS analytical algorithm was benchmarked against experimental data and the FLUKA Monte Carlo (MC) code, previously validated for the selected beam-line. We tested the Siemens syngo® TPS plan optimisation module for water cubes, fixing the configurable parameters at clinical standards, with homogeneous target coverage to a 2 Gy (RBE) dose prescription as the only goal. Plans were delivered and the dose at each volume centre was measured in water with a calibrated PTW Advanced Markus® chamber. An EBT3® film was also positioned at the phantom entrance window for the acquisition of 2D dose maps. Discrepancies between TPS-calculated and MC-simulated values were mainly due to different lateral spread modeling and were found to be related to the field-to-spot size ratio. The accuracy of the TPS proved to be clinically acceptable in all cases except very small and shallow volumes. In this context, the use of MC to validate TPS results proved to be a reliable procedure for pre-treatment plan verification.

  3. Dosimetric accuracy of a treatment planning system for actively scanned proton beams and small target volumes: Monte Carlo and experimental validation.

    PubMed

    Magro, G; Molinelli, S; Mairani, A; Mirandola, A; Panizza, D; Russo, S; Ferrari, A; Valvo, F; Fossati, P; Ciocca, M

    2015-09-07

    This study was performed to evaluate the accuracy of a commercial treatment planning system (TPS) in optimising proton pencil beam dose distributions for small targets of different sizes (5-30 mm side) located at increasing depths in water. The TPS analytical algorithm was benchmarked against experimental data and the FLUKA Monte Carlo (MC) code, previously validated for the selected beam-line. We tested the Siemens syngo(®) TPS plan optimisation module for water cubes, fixing the configurable parameters at clinical standards, with homogeneous target coverage to a 2 Gy (RBE) dose prescription as the only goal. Plans were delivered and the dose at each volume centre was measured in water with a calibrated PTW Advanced Markus(®) chamber. An EBT3(®) film was also positioned at the phantom entrance window for the acquisition of 2D dose maps. Discrepancies between TPS-calculated and MC-simulated values were mainly due to different lateral spread modeling and were found to be related to the field-to-spot size ratio. The accuracy of the TPS proved to be clinically acceptable in all cases except very small and shallow volumes. In this context, the use of MC to validate TPS results proved to be a reliable procedure for pre-treatment plan verification.

  4. Verification of Pharmacogenetics-Based Warfarin Dosing Algorithms in Han-Chinese Patients Undertaking Mechanic Heart Valve Replacement

    PubMed Central

    Zhao, Li; Chen, Chunxia; Li, Bei; Dong, Li; Guo, Yingqiang; Xiao, Xijun; Zhang, Eryong; Qin, Li

    2014-01-01

    Objective To study the performance of pharmacogenetics-based warfarin dosing algorithms in the initial and the stable warfarin treatment phases in a cohort of Han-Chinese patients undertaking mechanic heart valve replacement. Methods We searched PubMed, Chinese National Knowledge Infrastructure and Wanfang databases to select pharmacogenetics-based warfarin dosing models. Patients with mechanic heart valve replacement were consecutively recruited between March 2012 and July 2012. The predicted warfarin dose of each patient was calculated and compared with the observed initial and stable warfarin doses. The percentage of patients whose predicted dose fell within 20% of their actual therapeutic dose (percentage within 20%) and the mean absolute error (MAE) were used to evaluate the predictive accuracy of all the selected algorithms. Results A total of 8 algorithms, including the Du, Huang, Miao, Wei, Zhang, Lou, Gage, and International Warfarin Pharmacogenetics Consortium (IWPC) models, were tested in 181 patients. The MAE of the Gage, IWPC and 6 Han-Chinese pharmacogenetics-based warfarin dosing algorithms was less than 0.6 mg/day, and the percentage within 20% exceeded 45% for all of the selected models in both the initial and the stable treatment stages. When patients were stratified according to warfarin dose range, all of the equations demonstrated better performance in the ideal-dose range (1.88–4.38 mg/day) than in the low-dose range (<1.88 mg/day). Among the 8 algorithms compared, the algorithms of Wei, Huang, and Miao showed a lower MAE and a higher percentage within 20% in both the initial and the stable warfarin dose prediction and in the low-dose and the ideal-dose ranges. Conclusions All of the selected pharmacogenetics-based warfarin dosing regimens performed similarly in our cohort. However, the algorithms of Wei, Huang, and Miao showed better potential for warfarin prediction in the initial and the stable treatment phases in Han-Chinese patients undertaking mechanic heart valve replacement. PMID:24728385

  5. Verification of pharmacogenetics-based warfarin dosing algorithms in Han-Chinese patients undertaking mechanic heart valve replacement.

    PubMed

    Zhao, Li; Chen, Chunxia; Li, Bei; Dong, Li; Guo, Yingqiang; Xiao, Xijun; Zhang, Eryong; Qin, Li

    2014-01-01

    To study the performance of pharmacogenetics-based warfarin dosing algorithms in the initial and the stable warfarin treatment phases in a cohort of Han-Chinese patients undertaking mechanic heart valve replacement. We searched PubMed, Chinese National Knowledge Infrastructure and Wanfang databases to select pharmacogenetics-based warfarin dosing models. Patients with mechanic heart valve replacement were consecutively recruited between March 2012 and July 2012. The predicted warfarin dose of each patient was calculated and compared with the observed initial and stable warfarin doses. The percentage of patients whose predicted dose fell within 20% of their actual therapeutic dose (percentage within 20%) and the mean absolute error (MAE) were used to evaluate the predictive accuracy of all the selected algorithms. A total of 8 algorithms, including the Du, Huang, Miao, Wei, Zhang, Lou, Gage, and International Warfarin Pharmacogenetics Consortium (IWPC) models, were tested in 181 patients. The MAE of the Gage, IWPC and 6 Han-Chinese pharmacogenetics-based warfarin dosing algorithms was less than 0.6 mg/day, and the percentage within 20% exceeded 45% for all of the selected models in both the initial and the stable treatment stages. When patients were stratified according to warfarin dose range, all of the equations demonstrated better performance in the ideal-dose range (1.88-4.38 mg/day) than in the low-dose range (<1.88 mg/day). Among the 8 algorithms compared, the algorithms of Wei, Huang, and Miao showed a lower MAE and a higher percentage within 20% in both the initial and the stable warfarin dose prediction and in the low-dose and the ideal-dose ranges. All of the selected pharmacogenetics-based warfarin dosing regimens performed similarly in our cohort. However, the algorithms of Wei, Huang, and Miao showed better potential for warfarin prediction in the initial and the stable treatment phases in Han-Chinese patients undertaking mechanic heart valve replacement.
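
    Both evaluation metrics used in these warfarin studies are easy to reproduce; a short Python sketch (variable names are illustrative):

        import numpy as np

        def dose_prediction_metrics(predicted, observed):
            # Mean absolute error (mg/day) and the percentage of patients whose
            # predicted dose falls within 20% of the observed therapeutic dose.
            predicted = np.asarray(predicted, dtype=float)
            observed = np.asarray(observed, dtype=float)
            mae = np.mean(np.abs(predicted - observed))
            within_20 = 100.0 * np.mean(np.abs(predicted - observed) <= 0.2 * observed)
            return mae, within_20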

  6. Low-dose interleukin-2 as a modulator of Treg homeostasis after HSCT: current understanding and future perspectives.

    PubMed

    Matsuoka, Ken-Ichi

    2018-02-01

    CD4+CD25+Foxp3+ Treg is a functionally distinct subset of mature T cells with broad suppressive activity that has been shown to play an important role in the establishment of immune tolerance after HSCT. The altered cytokine environment in post-HSCT lymphopenia, with a relative functional deficiency of IL-2, can hamper the reconstitution of Treg, leading to refractory GVHD. Based on the theory of low-dose IL-2, in which Treg can be selectively stimulated through the high-affinity IL-2 receptor, clinical studies have been conducted and have demonstrated that low-dose IL-2 administration can restore Treg homeostasis and promote the expansion of this subset during the polymorphic processes of Treg reconstitution after HSCT. The new therapeutic indication of IL-2 for immune tolerance has been launched in the field of HSCT and is spreading to other fields, including the treatment of autoimmune diseases. To further extend the indication of low-dose IL-2 to more patients with various immunological problems, optimization of the timing and dosing of IL-2 intervention and of the concomitant immunosuppressive therapy, according to patient-based assessment, is desirable in the near future. Further prospective studies may facilitate the development of novel therapeutic algorithms for the effective and safe induction of immune tolerance after HSCT.

  7. Evaluation of hybrid inverse planning and optimization (HIPO) algorithm for optimization in real-time, high-dose-rate (HDR) brachytherapy for prostate.

    PubMed

    Pokharel, Shyam; Rana, Suresh; Blikenstaff, Joseph; Sadeghi, Amir; Prestidge, Bradley

    2013-07-08

    The purpose of this study is to investigate the effectiveness of the HIPO planning and optimization algorithm for real-time prostate HDR brachytherapy. This study consists of 20 patients who underwent ultrasound-based real-time HDR brachytherapy of the prostate using the treatment planning system Oncentra Prostate (SWIFT version 3.0). The treatment plans for all patients were optimized using inverse dose-volume histogram-based optimization followed by graphical optimization (GRO) in real time. GRO is the manual manipulation of isodose lines slice by slice, so the quality of the plan depends heavily on planner expertise and experience. The data for all patients were retrieved later, and treatment plans were created and optimized using the HIPO algorithm with the same set of dose constraints, number of catheters, and set of contours as in the real-time optimization. The HIPO algorithm is a hybrid because it combines a stochastic and a deterministic algorithm. The stochastic algorithm, simulated annealing, searches for optimal catheter distributions for a given set of dose objectives. The deterministic algorithm, dose-volume histogram-based optimization (DVHO), then quickly optimizes the three-dimensional dose distribution by moving straight downhill once it is in the advantageous region of the search space found by the stochastic algorithm. The PTV receiving 100% of the prescription dose (V100) was 97.56% and 95.38% with GRO and HIPO, respectively. The mean dose (D(mean)) and the minimum dose to 10% of the volume (D10) for the urethra, rectum, and bladder were all statistically lower with HIPO compared to GRO using Student's paired t-test at the 5% significance level. HIPO can provide treatment plans with target coverage comparable to that of GRO with a reduction in dose to the critical structures.
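
    The stochastic half of a hybrid optimizer like HIPO can be caricatured with a generic simulated-annealing loop; the Python sketch below is a schematic illustration, not the commercial implementation, and the objective and neighbour functions are placeholders supplied by the caller:

        import math, random

        def anneal_catheters(initial, objective, neighbour, T0=1.0, cooling=0.95, steps=1000):
            # Stochastic search over candidate catheter configurations; a
            # deterministic DVH-based optimizer would then fine-tune dwell
            # times for the best configuration found.
            current = best = initial
            T = T0
            for _ in range(steps):
                candidate = neighbour(current)
                delta = objective(candidate) - objective(current)
                # Always accept improvements; accept worse moves with Boltzmann probability.
                if delta < 0 or random.random() < math.exp(-delta / T):
                    current = candidate
                    if objective(current) < objective(best):
                        best = current
                T *= cooling   # geometric cooling schedule
            return best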

  8. High speed stereovision setup for position and motion estimation of fertilizer particles leaving a centrifugal spreader.

    PubMed

    Hijazi, Bilal; Cool, Simon; Vangeyte, Jürgen; Mertens, Koen C; Cointault, Frédéric; Paindavoine, Michel; Pieters, Jan G

    2014-11-13

    A 3D imaging technique using a high speed binocular stereovision system was developed, in combination with corresponding image processing algorithms, for accurate determination of the parameters of particles leaving the spinning disks of centrifugal fertilizer spreaders. Validation of the stereo-matching algorithm using a virtual 3D stereovision simulator indicated an error of less than 2 pixels for 90% of the particles. The setup was validated using the cylindrical spread pattern of an experimental spreader. A 2D correlation coefficient of 90% and a relative error of 27% were found between the experimental results and the (simulated) spread pattern obtained with the developed setup. In combination with a ballistic flight model, the developed image acquisition and processing algorithms can enable fast determination and evaluation of the spread pattern, which can be used as a tool for spreader design and precise machine calibration.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iwai, P; Lins, L Nadler

    Purpose: There is a lack of studies with significant cohort data about patients with a pacemaker (PM), implanted cardioverter defibrillator (ICD), or cardiac resynchronization therapy (CRT) device undergoing radiotherapy. There is no literature comparing the cumulative doses delivered to those cardiac implanted electronic devices (CIED) calculated by different algorithms, nor studies comparing doses with and without heterogeneity correction. The aim of this study was to evaluate the influence of the Pencil Beam Convolution (PBC), Analytical Anisotropic Algorithm (AAA) and Acuros XB (AXB) algorithms, as well as heterogeneity correction, on the risk categorization of patients. Methods: A retrospective analysis of 19 3D-CRT or IMRT plans of 17 patients was conducted, calculating the dose delivered to the CIED using the three calculation algorithms. Doses were evaluated with and without heterogeneity correction for comparison. Risk categorization of the patients was based on their CIED dependency and the cumulative dose in the devices. Results: Total estimated doses at the CIED calculated by AAA or AXB were higher than those calculated by PBC in 56% of the cases. On average, the doses at the CIED calculated by AAA and AXB were 29% and 4% higher, respectively, than those calculated by PBC. The maximum difference between doses calculated by each algorithm was about 1 Gy, whether heterogeneity correction was used or not. Values of maximum dose calculated with heterogeneity correction showed that the dose at the CIED was equal or higher in 84% of the cases with PBC, 77% with AAA and 67% with AXB than the dose obtained without heterogeneity correction. Conclusion: The dose calculation algorithm and heterogeneity correction did not change the risk categorization. Since higher estimated doses delivered to the CIED do not compromise the treatment precautions to be taken, it is recommended that the most sophisticated algorithm available be used to predict the dose at the CIED, with heterogeneity correction.

  10. A new warfarin dosing algorithm including VKORC1 3730 G > A polymorphism: comparison with results obtained by other published algorithms.

    PubMed

    Cini, Michela; Legnani, Cristina; Cosmi, Benilde; Guazzaloca, Giuliana; Valdrè, Lelia; Frascaro, Mirella; Palareti, Gualtiero

    2012-08-01

    Warfarin dosing is affected by clinical and genetic variants, but the contribution of the genotype associated with warfarin resistance to pharmacogenetic algorithms has not yet been well assessed. We developed a new dosing algorithm including polymorphisms associated with both warfarin sensitivity and resistance in the Italian population, and its performance was compared with those of eight previously published algorithms. Clinical and genetic data (CYP2C9*2, CYP2C9*3, VKORC1 -1639 G > A, and VKORC1 3730 G > A) were used to elaborate the new algorithm. The derivation and validation groups comprised 55 (58.2% men, mean age 69 years) and 40 (57.5% men, mean age 70 years) patients, respectively, who had been on stable anticoagulation therapy for at least 3 months for different oral anticoagulation therapy (OAT) indications. The performance of the new algorithm, evaluated by the mean absolute error (MAE, defined as the absolute value of the difference between the observed daily maintenance dose and the predicted daily dose), the correlation with the observed dose and the R(2) value, was comparable with or slightly lower than that obtained using the other algorithms. The new algorithm correctly assigned 53.3%, 50.0%, and 57.1% of patients to the low (≤25 mg/week), intermediate (26-44 mg/week) and high (≥45 mg/week) dosing ranges, respectively. Our data showed a significant increase in predictive accuracy among patients requiring a high warfarin dose compared with the other algorithms (ranging from 0% to 28.6%). The algorithm including VKORC1 3730 G > A, associated with warfarin resistance, allowed a more accurate identification of resistant patients who require a higher warfarin dosage.

  11. Hybrid dose calculation: a dose calculation algorithm for microbeam radiation therapy

    NASA Astrophysics Data System (ADS)

    Donzelli, Mattia; Bräuer-Krisch, Elke; Oelfke, Uwe; Wilkens, Jan J.; Bartzsch, Stefan

    2018-02-01

    Microbeam radiation therapy (MRT) is still a preclinical approach in radiation oncology that uses planar, micrometre-wide beamlets with extremely high peak doses, separated by a few hundred micrometre wide low dose regions. Abundant preclinical evidence demonstrates that MRT spares normal tissue more effectively than conventional radiation therapy, at equivalent tumour control. In order to launch first clinical trials, accurate and efficient dose calculation methods are an indispensable prerequisite. In this work a hybrid dose calculation approach is presented that is based on a combination of Monte Carlo and kernel based dose calculation. In various examples the performance of the algorithm is compared to purely Monte Carlo and purely kernel based dose calculations. The accuracy of the developed algorithm is comparable to conventional pure Monte Carlo calculations. In particular for inhomogeneous materials the hybrid dose calculation algorithm outperforms purely convolution based dose calculation approaches. It is demonstrated that the hybrid algorithm can efficiently calculate even complicated pencil beam and cross firing beam geometries. The required calculation times are substantially lower than for pure Monte Carlo calculations.
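
    The kernel-based half of such a hybrid scheme amounts to convolving the energy released in the medium (TERMA) with a dose deposition kernel; Monte Carlo then replaces this step wherever the kernel assumption breaks down. A toy 1D Python illustration (all numbers arbitrary):

        import numpy as np

        def kernel_dose_1d(terma, kernel):
            # Convolution of energy released per unit mass with a
            # normalized deposition kernel.
            return np.convolve(terma, kernel, mode="same")

        # Toy example: exponentially attenuated primary fluence and a
        # narrow Gaussian kernel.
        depth = np.linspace(0.0, 10.0, 200)                         # cm
        terma = np.exp(-0.05 * depth)                               # arbitrary units
        kernel = np.exp(-0.5 * (np.linspace(-1, 1, 21) / 0.2) ** 2)
        kernel /= kernel.sum()
        dose = kernel_dose_1d(terma, kernel)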

  12. Influence of different dose calculation algorithms on the estimate of NTCP for lung complications

    PubMed Central

    Bäck, Anna

    2013-01-01

    Due to limitations and uncertainties in dose calculation algorithms, different algorithms can predict different dose distributions and dose‐volume histograms for the same treatment. This can be a problem when estimating the normal tissue complication probability (NTCP) for patient‐specific dose distributions. Published NTCP model parameters are often derived for a different dose calculation algorithm than the one used to calculate the actual dose distribution. The use of algorithm‐specific NTCP model parameters can prevent errors caused by differences in dose calculation algorithms. The objective of this work was to determine how to change the NTCP model parameters for lung complications derived for a simple correction‐based pencil beam dose calculation algorithm, in order to make them valid for three other common dose calculation algorithms. NTCP was calculated with the relative seriality (RS) and Lyman‐Kutcher‐Burman (LKB) models. The four dose calculation algorithms used were the pencil beam (PB) and collapsed cone (CC) algorithms employed by Oncentra, and the pencil beam convolution (PBC) and anisotropic analytical algorithm (AAA) employed by Eclipse. Original model parameters for lung complications were taken from four published studies on different grades of pneumonitis, and new algorithm‐specific NTCP model parameters were determined. The difference between original and new model parameters was presented in relation to the reported model parameter uncertainties. Three different types of treatments were considered in the study: tangential and locoregional breast cancer treatment and lung cancer treatment. Changing the algorithm without the derivation of new model parameters caused changes in the NTCP value of up to 10 percentage points for the cases studied. Furthermore, the error introduced could be of the same magnitude as the confidence intervals of the calculated NTCP values. The new NTCP model parameters were tabulated as the algorithm was varied from PB to PBC, AAA, or CC. Moving from the PB to the PBC algorithm did not require new model parameters; however, moving from PB to AAA or CC did require a change in the NTCP model parameters, with CC requiring the largest change. It was shown that the new model parameters for a given algorithm are different for the different treatment types. PACS numbers: 87.53.‐j, 87.53.Kn, 87.55.‐x, 87.55.dh, 87.55.kd PMID:24036865

  13. Fast and Accurate Detection of Spread Source in Large Complex Networks

    DTIC Science & Technology

    ... the patient one in epidemics, or the source of rumor spreading in a social network. Pinto, Thiran and Vetterli introduced an algorithm (PTVA) to solve the ... important case of this problem in which a limited set of nodes act as observers and report the times at which the spread reached them. PTVA uses all ...

  14. Evaluation of genotype-guided acenocoumarol dosing algorithms in Russian patients.

    PubMed

    Sychev, Dmitriy Alexeyevich; Rozhkov, Aleksandr Vladimirovich; Ananichuk, Anna Viktorovna; Kazakov, Ruslan Evgenyevich

    2017-05-24

    Acenocoumarol dose is normally determined via a step-by-step adjustment process based on International Normalized Ratio (INR) measurements, during which the risk of adverse reactions is especially high. Several genotype-based acenocoumarol dosing algorithms have been created to predict ideal doses at the start of anticoagulant therapy. Nine dosing algorithms were selected through a literature search and evaluated using a cohort of 63 patients with atrial fibrillation receiving acenocoumarol therapy. None of the existing algorithms could predict the ideal acenocoumarol dose for 50% of Russian patients. The Wolkanin-Bartnik algorithm, based on a European population, performed best, with the highest correlation (r=0.397) and a mean absolute error (MAE) of 0.82 (±0.61). EU-PACT also gave an estimate within the ideal range in 43% of the cases. The two least accurate results were yielded by the Indian population-based algorithms. Among patients receiving amiodarone, the algorithms by Schie and Tong proved the most effective, with MAEs of 0.48±0.42 mg/day and 0.56±0.31 mg/day, respectively. Patient ethnicity and amiodarone intake are factors that must be considered when building future algorithms. Further research is required to find an optimal formula for acenocoumarol maintenance dosing in Russian patients.

  15. Epidemic spreading on complex networks with overlapping and non-overlapping community structure

    NASA Astrophysics Data System (ADS)

    Shang, Jiaxing; Liu, Lianchen; Li, Xin; Xie, Feng; Wu, Cheng

    2015-02-01

    Many real-world networks exhibit community structure where vertices belong to one or more communities. Recent studies show that community structure plays an important role in epidemic spreading. In this paper, we investigate how the extent of overlap among communities affects epidemics. In order to experiment with the characteristics of overlapping communities, we propose a rewiring algorithm that can change the community structure from overlapping to non-overlapping while maintaining the degree distribution of the network. We simulate the Susceptible-Infected-Susceptible (SIS) epidemic process on synthetic scale-free networks and real-world networks by applying our rewiring algorithm. Experiments show that epidemics spread faster on networks with a higher level of community overlap. Furthermore, the effect of overlapping communities interacts with the effect of the average degree. Our work further illustrates the important role of overlapping communities in the process of epidemic spreading.
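
    A bare-bones SIS simulation of the kind described above takes only a few lines; the Python sketch below uses networkx and illustrative parameter values:

        import random
        import networkx as nx

        def sis_step(G, infected, beta=0.1, mu=0.05):
            # One synchronous update: infected nodes infect susceptible
            # neighbours with probability beta and recover with probability mu.
            new_infected = set(infected)
            for node in infected:
                for nb in G.neighbors(node):
                    if nb not in infected and random.random() < beta:
                        new_infected.add(nb)
                if random.random() < mu:
                    new_infected.discard(node)
            return new_infected

        G = nx.barabasi_albert_graph(1000, 3)              # synthetic scale-free network
        infected = {random.randrange(G.number_of_nodes())}
        for _ in range(100):
            infected = sis_step(G, infected)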

  16. Information spread of emergency events: path searching on social networks.

    PubMed

    Dai, Weihui; Hu, Hongzhi; Wu, Tunan; Dai, Yonghui

    2014-01-01

    Emergency events have attracted the global attention of governments and the public, and they can easily trigger a series of serious social problems if not supervised effectively during the dissemination process. In the Internet world, people communicate with each other and form various virtual communities based on social networks, which leads to a complex and fast information spread pattern for emergency events. This paper collects Internet data based on data acquisition and topic detection technology, analyzes the process of information spread on social networks, describes the diffusion and impact of that information from the perspective of random graphs, and finally seeks the key paths through an improved IBF algorithm. Application cases have shown that this algorithm can search the shortest spread paths efficiently, which may help us guide and control the information dissemination of emergency events for early warning.

  17. Dosing algorithm to target a predefined AUC in patients with primary central nervous system lymphoma receiving high dose methotrexate.

    PubMed

    Joerger, Markus; Ferreri, Andrés J M; Krähenbühl, Stephan; Schellens, Jan H M; Cerny, Thomas; Zucca, Emanuele; Huitema, Alwin D R

    2012-02-01

    There is no consensus regarding optimal dosing of high dose methotrexate (HDMTX) in patients with primary CNS lymphoma. Our aim was to develop a convenient dosing algorithm to target AUC(MTX) in the range between 1000 and 1100 µmol l(-1) h. A population covariate model from a pooled dataset of 131 patients receiving HDMTX was used to simulate concentration-time curves of 10,000 patients and test the efficacy of a dosing algorithm based on 24 h MTX plasma concentrations to target the prespecified AUC(MTX). These data simulations included interindividual, interoccasion and residual unidentified variability. Patients received a total of four simulated cycles of HDMTX and adjusted MTX dosages were given for cycles two to four. The dosing algorithm proposes MTX dose adaptations ranging from +75% in patients with MTX C(24) < 0.5 µmol l(-1) up to -35% in patients with MTX C(24) > 12 µmol l(-1). The proposed dosing algorithm resulted in a marked improvement of the proportion of patients within the AUC(MTX) target between 1000 and 1100 µmol l(-1) h (11% with the standard MTX dose, 35% with the adjusted dose) and a marked reduction of the interindividual variability of MTX exposure. A simple and practical dosing algorithm for HDMTX has been developed based on MTX 24 h plasma concentrations, and its potential efficacy in improving the proportion of patients within a prespecified target AUC(MTX) and reducing the interindividual variability of MTX exposure has been shown by data simulations. The clinical benefit of this dosing algorithm should be assessed in patients with primary central nervous system lymphoma (PCNSL). © 2011 The Authors. British Journal of Clinical Pharmacology © 2011 The British Pharmacological Society.
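
    Only the two end points of the adaptation rule are quoted in the abstract (+75% below 0.5 µmol/l and -35% above 12 µmol/l), so the Python sketch below fills the intermediate bracket with a placeholder; it is a schematic of the idea, not the published table:

        def mtx_dose_adjustment(c24_umol_l):
            # Relative dose change for the next HDMTX cycle based on the
            # 24 h plasma concentration; only the end points are from the
            # abstract, the middle bracket is hypothetical.
            if c24_umol_l < 0.5:
                return +0.75
            if c24_umol_l > 12.0:
                return -0.35
            return 0.0   # placeholder: the real algorithm adapts in graded steps

        def next_dose(current_dose, c24_umol_l):
            return current_dose * (1.0 + mtx_dose_adjustment(c24_umol_l))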

  18. Comparison of selected dose calculation algorithms in radiotherapy treatment planning for tissues with inhomogeneities

    NASA Astrophysics Data System (ADS)

    Woon, Y. L.; Heng, S. P.; Wong, J. H. D.; Ung, N. M.

    2016-03-01

    Inhomogeneity correction is recommended for accurate dose calculation in radiotherapy treatment planning, since the human body is highly inhomogeneous due to the presence of bones and air cavities. However, each dose calculation algorithm has its own limitations. This study assesses the accuracy of five algorithms currently implemented for treatment planning: pencil beam convolution (PBC), superposition (SP), anisotropic analytical algorithm (AAA), Monte Carlo (MC) and Acuros XB (AXB). The calculated dose was compared with the dose measured using radiochromic film (Gafchromic EBT2) in inhomogeneous phantoms. In addition, the dosimetric impact of the different algorithms on intensity modulated radiotherapy (IMRT) was studied for the head and neck region. MC had the best agreement with the measured percentage depth dose (PDD) within the inhomogeneous region, followed by AXB, AAA, SP and PBC. For IMRT planning, the MC algorithm is recommended in preference to PBC and SP. The MC and AXB algorithms were found to have better accuracy in terms of inhomogeneity correction and should be used for tumour volumes in the proximity of inhomogeneous structures.

  19. Relaxation of Distributed Data Aggregation for Underwater Acoustic Sensor Networks

    DTIC Science & Technology

    2014-03-31

    3.1 Gossip algorithms for distributed averaging ... 3.2 Distributed particle filtering ... algorithm that had direct access to all of the measurements. We use gossip algorithms (discussed in Section 3.1) to diffuse information across the ... We begin by discussing gossip algorithms, which we use to synchronize and spread information.
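
    The gossip-averaging primitive mentioned in the fragment above is simple to state: a randomly chosen pair of neighbouring nodes repeatedly replaces both local values with their mean, which converges to the network-wide average without any central coordinator. A minimal Python sketch (not the report's implementation):

        import random

        def gossip_average(values, edges, iterations=10000):
            # values: dict node -> local measurement; edges: list of (u, v) links.
            values = dict(values)
            for _ in range(iterations):
                u, v = random.choice(edges)
                mean = 0.5 * (values[u] + values[v])
                values[u] = values[v] = mean   # both nodes adopt the pairwise mean
            return values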

  20. Sparsity constrained split feasibility for dose-volume constraints in inverse planning of intensity-modulated photon or proton therapy

    NASA Astrophysics Data System (ADS)

    Penfold, Scott; Zalas, Rafał; Casiraghi, Margherita; Brooke, Mark; Censor, Yair; Schulte, Reinhard

    2017-05-01

    A split feasibility formulation for the inverse problem of intensity-modulated radiation therapy treatment planning with dose-volume constraints included in the planning algorithm is presented. It involves a new type of sparsity constraint that enables the inclusion of a percentage-violation constraint in the model problem and its handling by continuous (as opposed to integer) methods. We propose an iterative algorithmic framework for solving such a problem by applying the feasibility-seeking CQ-algorithm of Byrne combined with the automatic relaxation method that uses cyclic projections. Detailed implementation instructions are furnished. Functionality of the algorithm was demonstrated through the creation of an intensity-modulated proton therapy plan for a simple 2D C-shaped geometry and also for a realistic base-of-skull chordoma treatment site. Monte Carlo simulations of proton pencil beams of varying energy were conducted to obtain dose distributions for the 2D test case. A research release of the Pinnacle 3 proton treatment planning system was used to extract pencil beam doses for a clinical base-of-skull chordoma case. In both cases the beamlet doses were calculated to satisfy dose-volume constraints according to our new algorithm. Examination of the dose-volume histograms following inverse planning with our algorithm demonstrated that it performed as intended. The application of our proposed algorithm to dose-volume constraint inverse planning was successfully demonstrated. Comparison with optimized dose distributions from the research release of the Pinnacle 3 treatment planning system showed the algorithm could achieve equivalent or superior results.
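
    The feasibility-seeking core referred to above, Byrne's CQ-algorithm, iterates x <- P_C(x - gamma * A^T(Ax - P_Q(Ax))) for the split feasibility problem of finding x in C with Ax in Q. A generic Python sketch with the projections left as caller-supplied placeholders (standing in for the beamlet bounds and the dose-volume/sparsity constraints of the paper):

        import numpy as np

        def cq_algorithm(A, x0, project_C, project_Q, gamma=None, iterations=500):
            # Split feasibility: find x in C such that A @ x lies in Q.
            if gamma is None:
                gamma = 1.0 / np.linalg.norm(A, 2) ** 2   # < 2/||A||^2 for convergence
            x = x0.copy()
            for _ in range(iterations):
                Ax = A @ x
                x = project_C(x - gamma * (A.T @ (Ax - project_Q(Ax))))
            return x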

  1. Pharmacogenetics-based warfarin dosing algorithm decreases time to stable anticoagulation and the risk of major hemorrhage: an updated meta-analysis of randomized controlled trials.

    PubMed

    Wang, Zhi-Quan; Zhang, Rui; Zhang, Peng-Pai; Liu, Xiao-Hong; Sun, Jian; Wang, Jun; Feng, Xiang-Fei; Lu, Qiu-Fen; Li, Yi-Gang

    2015-04-01

    Warfarin is still the most widely used oral anticoagulant for thromboembolic diseases, despite the recently introduced novel anticoagulants. However, difficulty in maintaining a stable dose within the therapeutic range and subsequent serious adverse effects have markedly limited its use in clinical practice. Pharmacogenetics-based warfarin dosing algorithms are a recently developed strategy to predict the initial and maintenance doses of warfarin. However, whether such algorithms are superior to conventional clinically guided dosing algorithms remains controversial. We compared pharmacogenetics-based and clinically guided dosing algorithms in an updated meta-analysis. We searched OVID MEDLINE, EMBASE, and the Cochrane Library for relevant citations. The primary outcome was the percentage of time in therapeutic range. The secondary outcomes were time to stable therapeutic dose and the risks of adverse events including all-cause mortality, thromboembolic events, total bleedings, and major bleedings. Eleven randomized controlled trials with 2639 participants were included. Our pooled estimates indicated that pharmacogenetics-based dosing algorithms did not improve the percentage of time in therapeutic range [weighted mean difference, 4.26; 95% confidence interval (CI), -0.50 to 9.01; P = 0.08], but they significantly shortened the time to stable therapeutic dose (weighted mean difference, -8.67; 95% CI, -11.86 to -5.49; P < 0.00001). Additionally, the pharmacogenetics-based algorithms significantly reduced the risk of major bleedings (odds ratio, 0.48; 95% CI, 0.23 to 0.98; P = 0.04), but did not reduce the risks of all-cause mortality, total bleedings, or thromboembolic events. Our results suggest that pharmacogenetics-based warfarin dosing algorithms significantly improve the efficiency of International Normalized Ratio correction and reduce the risk of major hemorrhage.

  2. Experimental evaluation of a GPU-based Monte Carlo dose calculation algorithm in the Monaco treatment planning system.

    PubMed

    Paudel, Moti R; Kim, Anthony; Sarfehnia, Arman; Ahmad, Sayed B; Beachey, David J; Sahgal, Arjun; Keller, Brian M

    2016-11-08

    A new GPU-based Monte Carlo dose calculation algorithm (GPUMCD), developed by the vendor Elekta for the Monaco treatment planning system (TPS), is capable of modeling dose for both a standard linear accelerator and an Elekta MRI linear accelerator. We have experimentally evaluated this algorithm for a standard Elekta Agility linear accelerator. A beam model was developed in the Monaco TPS (research version 5.09.06) using the commissioned beam data for a 6 MV Agility linac. A heterogeneous phantom representing several scenarios - tumor-in-lung, lung, and bone-in-tissue - was designed and built. Dose calculations in Monaco were done using both the current clinical Monte Carlo algorithm, XVMC, and the new GPUMCD algorithm. Dose calculations in a Pinnacle TPS were also produced using the collapsed cone convolution (CCC) algorithm with heterogeneity correction. Calculations were compared with the measured doses using an ionization chamber (A1SL) and Gafchromic EBT3 films for 2 × 2 cm2, 5 × 5 cm2, and 10 × 10 cm2 field sizes. The percentage depth doses (PDDs) calculated by XVMC and GPUMCD in a homogeneous solid water phantom were within 2%/2 mm of film measurements and within 1% of ion chamber measurements. For the tumor-in-lung phantom, the calculated doses were within 2.5%/2.5 mm of film measurements for GPUMCD. For the lung phantom, doses calculated by all of the algorithms were within 3%/3 mm of film measurements, except for the 2 × 2 cm2 field size where the CCC algorithm underestimated the depth dose by ~ 5% in a larger extent of the lung region. For the bone phantom, all of the algorithms were equivalent and calculated dose to within 2%/2 mm of film measurements, except at the interfaces. Both GPUMCD and XVMC showed interface effects, which were more pronounced for GPUMCD and were comparable to film measurements, whereas the CCC algorithm showed these effects poorly. © 2016 The Authors.

  3. Monte Carlo calculated microdosimetric spread for cell nucleus-sized targets exposed to brachytherapy 125I and 192Ir sources and 60Co cell irradiation.

    PubMed

    Villegas, Fernanda; Tilly, Nina; Ahnesjö, Anders

    2013-09-07

    The stochastic nature of ionizing radiation interactions causes a microdosimetric spread in energy depositions for cell or cell nucleus-sized volumes. The magnitude of the spread may be a confounding factor in dose response analysis. The aim of this work is to give values for the microdosimetric spread for a range of doses imparted by (125)I and (192)Ir brachytherapy radionuclides, and for a (60)Co source. An upgraded version of the Monte Carlo code PENELOPE was used to obtain frequency distributions of specific energy for each of these radiation qualities and for four different cell nucleus-sized volumes. The results demonstrate that the magnitude of the microdosimetric spread increases when the target size decreases or when the energy of the radiation quality is reduced. Frequency distributions calculated according to the formalism of Kellerer and Chmelevsky using full convolution of the Monte Carlo calculated single track frequency distributions confirm that at doses exceeding 0.08 Gy for (125)I, 0.1 Gy for (192)Ir, and 0.2 Gy for (60)Co, the resulting distribution can be accurately approximated with a normal distribution. A parameterization of the width of the distribution as a function of dose and target volume of interest is presented as a convenient form for the use in response modelling or similar contexts.
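
    Numerically, the Kellerer-Chmelevsky compounding amounts to a compound-Poisson sum of k-fold self-convolutions of the single-event distribution. A minimal Python sketch (the single-event spectrum f1 is an arbitrary placeholder normalized on a uniform grid):

        import numpy as np
        from scipy.stats import poisson

        def multi_event_distribution(f1, mean_events, n_max=50):
            # f(z; D) = sum_k P(k; mean_events) * f1^{*k}(z)
            f = np.zeros_like(f1)
            fk = np.zeros_like(f1)
            fk[0] = 1.0                    # k = 0: all probability at z = 0
            for k in range(n_max):
                f += poisson.pmf(k, mean_events) * fk
                fk = np.convolve(fk, f1)[: len(f1)]   # k-fold -> (k+1)-fold convolution
            return f / f.sum()             # renormalize on the truncated grid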

  4. The Impact of Monte Carlo Dose Calculations on Intensity-Modulated Radiation Therapy

    NASA Astrophysics Data System (ADS)

    Siebers, J. V.; Keall, P. J.; Mohan, R.

    The effect of dose calculation accuracy for IMRT was studied by comparing different dose calculation algorithms. A head and neck IMRT plan was optimized using a superposition dose calculation algorithm. Dose was re-computed for the optimized plan using both Monte Carlo and pencil beam dose calculation algorithms to generate patient and phantom dose distributions. Tumor control probabilities (TCP) and normal tissue complication probabilities (NTCP) were computed to estimate the plan outcome. For the treatment plan studied, Monte Carlo best reproduces phantom dose measurements, the TCP was slightly lower than the superposition and pencil beam results, and the NTCP values differed little.

  5. Characterization of high-dose and high-energy implanted gate and source diode and analysis of lateral spreading of p gate profile in high voltage SiC static induction transistors

    NASA Astrophysics Data System (ADS)

    Onose, Hidekatsu; Kobayashi, Yutaka; Onuki, Jin

    2017-03-01

    The effect of the p gate dose on the characteristics of the gate-source diode in SiC static induction transistors (SIT) was investigated. It was found that a dose of 1.5 × 1014 cm-2 yields a pn junction breakdown voltage higher than 60 V and good forward characteristics. A normally on SiC SIT was fabricated and demonstrated. A blocking voltage higher than 2.0 kV at a gate-source voltage of -50 V and on-resistance of 70 mΩ cm2 were obtained. Device simulations were performed to investigate the effect of the lateral spreading. By comparing the measured I-V curves with simulation results, the lateral spreading factor was estimated to be about 0.5. The lateral spreading detrimentally affected the electrical properties of the SIT made using implantations at energies higher than 1 MeV.

  6. Research on parallel combinatory spread spectrum communication system with double information matching

    NASA Astrophysics Data System (ADS)

    Xue, Wei; Wang, Qi; Wang, Tianyu

    2018-04-01

    This paper presents an improved parallel combinatory spread spectrum (PC/SS) communication system based on the method of double information matching (DIM). Compared with the conventional PC/SS system, the new model inherits the advantages of high transmission speed, large information capacity and high security. However, the traditional system suffers from a high bit error rate (BER) caused by its data-sequence mapping algorithm. The presented model achieves a lower BER and higher efficiency through an optimized mapping algorithm.

  7. Information Spread of Emergency Events: Path Searching on Social Networks

    PubMed Central

    Hu, Hongzhi; Wu, Tunan

    2014-01-01

    Emergency events have attracted the global attention of governments and the public, and they can easily trigger a series of serious social problems if not supervised effectively during the dissemination process. In the Internet world, people communicate with each other and form various virtual communities based on social networks, which leads to a complex and fast information spread pattern for emergency events. This paper collects Internet data based on data acquisition and topic detection technology, analyzes the process of information spread on social networks, describes the diffusion and impact of that information from the perspective of random graphs, and finally seeks the key paths through an improved IBF algorithm. Application cases have shown that this algorithm can search the shortest spread paths efficiently, which may help us guide and control the information dissemination of emergency events for early warning. PMID:24600323

  8. Accuracy of patient specific organ-dose estimates obtained using an automated image segmentation algorithm

    NASA Astrophysics Data System (ADS)

    Gilat-Schmidt, Taly; Wang, Adam; Coradi, Thomas; Haas, Benjamin; Star-Lack, Josh

    2016-03-01

    The overall goal of this work is to develop a rapid, accurate and fully automated software tool to estimate patient-specific organ doses from computed tomography (CT) scans using a deterministic Boltzmann Transport Equation solver and automated CT segmentation algorithms. This work quantified the accuracy of organ dose estimates obtained by an automated segmentation algorithm. The investigated algorithm uses a combination of feature-based and atlas-based methods. A multiatlas approach was also investigated. We hypothesize that the auto-segmentation algorithm is sufficiently accurate to provide organ dose estimates since random errors at the organ boundaries will average out when computing the total organ dose. To test this hypothesis, twenty head-neck CT scans were expertly segmented into nine regions. A leave-one-out validation study was performed, where every case was automatically segmented with each of the remaining cases used as the expert atlas, resulting in nineteen automated segmentations for each of the twenty datasets. The segmented regions were applied to gold-standard Monte Carlo dose maps to estimate mean and peak organ doses. The results demonstrated that the fully automated segmentation algorithm estimated the mean organ dose to within 10% of the expert segmentation for regions other than the spinal canal, with median error for each organ region below 2%. In the spinal canal region, the median error was 7% across all data sets and atlases, with a maximum error of 20%. The error in peak organ dose was below 10% for all regions, with a median error below 4% for all organ regions. The multiple-case atlas reduced the variation in the dose estimates and additional improvements may be possible with more robust multi-atlas approaches. Overall, the results support potential feasibility of an automated segmentation algorithm to provide accurate organ dose estimates.
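
    Given a voxelwise dose map and a binary organ mask (expert or automated), the organ-dose statistics compared above reduce to masked reductions; a short Python sketch with hypothetical inputs:

        import numpy as np

        def organ_dose_stats(dose_map, organ_mask):
            # Mean and peak dose within a segmented organ region.
            organ_dose = dose_map[organ_mask.astype(bool)]
            return organ_dose.mean(), organ_dose.max()

        def percent_error(auto_value, expert_value):
            # Relative error of the auto-segmentation dose estimate.
            return 100.0 * abs(auto_value - expert_value) / expert_value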

  9. Effect of stratification and geometrical spreading on sonic boom rise time

    NASA Technical Reports Server (NTRS)

    Cleveland, Robin O.; Hamilton, Mark F.; Blackstock, David T.

    1994-01-01

    The purpose of our investigation is to determine the effect of unsteadiness (not associated with turbulence) on rise time. The unsteadiness considered here is due to (1) geometrical spreading, (2) stratification, which includes variation in density, temperature, and relative humidity, and (3) N shaped waveform. A very general Burgers equation, which includes all these effects, is the propagation model for our study. The equation is solved by a new computational algorithm in which all the calculations are done in the time domain. The present paper is a progress report in which some of the factors contributing to unsteadiness are studied, namely geometrical spreading and variation in relative humidity. The work of Pierce and Kang, which motivated our study, is first reviewed. We proceed with a discussion of the Burgers equation model and the algorithm for solving the equation. Some comparison tests to establish the validity of the algorithm are presented. The algorithm is then used to determine the distance required for a steady-state shock, on encountering an abrupt change in relative humidity, to reach a new steady state based on the new humidity. It is found that the transition distance for plane shocks of amplitude 70 Pa is about 4 km when the change in relative humidity is 10 percent. Shocks of amplitude 140 Pa require less distance. The effect of spherical and cylindrical spreading is also considered. We demonstrate that a spreading shock wave never reaches steady state and that its rise time will be less than the equivalent steady state shock. Finally we show that an N wave has a slightly shorter rise time than a step shock of the same amplitude.

  10. Comparison of build-up region doses in oblique tangential 6 MV photon beams calculated by AAA and CCC algorithms in breast Rando phantom

    NASA Astrophysics Data System (ADS)

    Masunun, P.; Tangboonduangjit, P.; Dumrongkijudom, N.

    2016-03-01

    The purpose of this study is to use two algorithms to compare the build-up region doses at the breast Rando phantom surface with bolus coverage, the doses within the breast Rando phantom, and the doses in the lung, a heterogeneous region. The AAA in the Eclipse TPS and the collapsed cone convolution (CCC) algorithm in the Pinnacle treatment planning system were used to plan a tangential field technique with a 6 MV photon beam and a 200 cGy total dose in the breast Rando phantom covered with bolus (5 mm and 10 mm). TLDs were calibrated with Cobalt-60 and used to measure the doses during irradiation. The treatment planning results show that the doses in the build-up region and in the breast phantom agreed closely between the two algorithms, with differences of less than 2%. However, AAA overestimated the dose in the lung (L2) by 13.78% and 6.06% at 5 mm and 10 mm bolus thickness, respectively, compared with the CCC algorithm. The TLD measurements showed an underestimate in the build-up region and in the breast phantom, but the doses in the lung (L2) were overestimated, when compared with the two plans at both bolus thicknesses.

  11. SU-F-T-600: Influence of Acuros XB and AAA Dose Calculation Algorithms On Plan Quality Metrics and Normal Lung Doses in Lung SBRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yaparpalvi, R; Mynampati, D; Kuo, H

    Purpose: To study the influence of the superposition-beam model (AAA) and the deterministic photon transport solver (Acuros XB) dose calculation algorithms on treatment plan quality metrics and on normal lung dose in lung SBRT. Methods: Treatment plans of 10 lung SBRT patients were randomly selected. Patients were prescribed a total dose of 50-54 Gy in 3-5 fractions (10 Gy × 5 or 18 Gy × 3). Plans were optimized with 6 MV beams using 2 arcs (VMAT). Doses were calculated using the AAA algorithm with heterogeneity correction. For each plan, plan quality metrics in the categories coverage, homogeneity, conformity and gradient were quantified. Repeat dosimetry for these AAA treatment plans was performed using the AXB algorithm with heterogeneity correction for the same beam and MU parameters. Plan quality metrics were again evaluated and compared with the AAA plan metrics. For normal lung dose, V20 and V5 of (total lung - GTV) were evaluated. Results: The results are summarized in Supplemental Table 1. PTV volume was on average 11.4 (±3.3) cm3. Comparing against RTOG 0813 protocol criteria for conformality, AXB plans yielded, on average, a similar PITV ratio (individual PITV ratio differences varied from −9 to +15%), reduced target coverage (−1.6%) and increased R50% (+2.6%). Comparing normal lung doses, the lung V20 (+3.1%) and V5 (+1.5%) were slightly higher for AXB plans than for AAA plans. High-dose spillage ((V105%PD − PTV)/PTV) was slightly lower for AXB plans, but the low-dose spillage (D2cm) was similar between the two calculation algorithms. Conclusion: The AAA algorithm overestimates lung target dose. Routinely adopting AXB for dose calculations in lung SBRT planning may improve dose calculation accuracy, as AXB-based calculations have been shown to be closer to Monte Carlo based dose predictions in accuracy, with relatively faster computational times. For clinical practice, revisiting dose-fractionation in lung SBRT to correct for dose overestimates attributable to the algorithm may well be warranted.
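
    The metrics above are simple functionals of the dose array and structure masks. A Python sketch under the usual definitions (PITV = prescription isodose volume / PTV volume, R50% = half-prescription isodose volume / PTV volume, Vx = percentage of normal lung receiving at least x Gy); masks are boolean arrays of equal voxel size, and the function is illustrative rather than a protocol implementation:

        import numpy as np

        def lung_sbrt_metrics(dose, ptv_mask, lung_mask, gtv_mask, rx=50.0):
            normal_lung = lung_mask & ~gtv_mask                 # total lung minus GTV
            pitv = np.sum(dose >= rx) / np.sum(ptv_mask)        # conformality (PITV)
            r50 = np.sum(dose >= 0.5 * rx) / np.sum(ptv_mask)   # gradient (R50%)
            v20 = 100.0 * np.sum(dose[normal_lung] >= 20.0) / np.sum(normal_lung)
            v5 = 100.0 * np.sum(dose[normal_lung] >= 5.0) / np.sum(normal_lung)
            coverage = 100.0 * np.sum(ptv_mask & (dose >= rx)) / np.sum(ptv_mask)
            return {"PITV": pitv, "R50": r50, "V20": v20, "V5": v5, "coverage": coverage}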

  12. Sensitivity of NTCP parameter values against a change of dose calculation algorithm.

    PubMed

    Brink, Carsten; Berg, Martin; Nielsen, Morten

    2007-09-01

    Optimization of radiation treatment planning requires estimations of the normal tissue complication probability (NTCP). A number of models exist that estimate NTCP from a calculated dose distribution. Since different dose calculation algorithms use different approximations the dose distributions predicted for a given treatment will in general depend on the algorithm. The purpose of this work is to test whether the optimal NTCP parameter values change significantly when the dose calculation algorithm is changed. The treatment plans for 17 breast cancer patients have retrospectively been recalculated with a collapsed cone algorithm (CC) to compare the NTCP estimates for radiation pneumonitis with those obtained from the clinically used pencil beam algorithm (PB). For the PB calculations the NTCP parameters were taken from previously published values for three different models. For the CC calculations the parameters were fitted to give the same NTCP as for the PB calculations. This paper demonstrates that significant shifts of the NTCP parameter values are observed for three models, comparable in magnitude to the uncertainties of the published parameter values. Thus, it is important to quote the applied dose calculation algorithm when reporting estimates of NTCP parameters in order to ensure correct use of the models.

  13. Sensitivity of NTCP parameter values against a change of dose calculation algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brink, Carsten; Berg, Martin; Nielsen, Morten

    2007-09-15

    Optimization of radiation treatment planning requires estimations of the normal tissue complication probability (NTCP). A number of models exist that estimate NTCP from a calculated dose distribution. Since different dose calculation algorithms use different approximations the dose distributions predicted for a given treatment will in general depend on the algorithm. The purpose of this work is to test whether the optimal NTCP parameter values change significantly when the dose calculation algorithm is changed. The treatment plans for 17 breast cancer patients have retrospectively been recalculated with a collapsed cone algorithm (CC) to compare the NTCP estimates for radiation pneumonitis with those obtained from the clinically used pencil beam algorithm (PB). For the PB calculations the NTCP parameters were taken from previously published values for three different models. For the CC calculations the parameters were fitted to give the same NTCP as for the PB calculations. This paper demonstrates that significant shifts of the NTCP parameter values are observed for three models, comparable in magnitude to the uncertainties of the published parameter values. Thus, it is important to quote the applied dose calculation algorithm when reporting estimates of NTCP parameters in order to ensure correct use of the models.
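
    The refitting step described in these two records can be cast as a small optimization problem: choose the parameter value(s) for which the NTCP computed from the collapsed-cone dose distributions reproduces the pencil-beam NTCP values. A schematic Python sketch (the ntcp_model callable and its data are placeholders):

        import numpy as np
        from scipy.optimize import minimize_scalar

        def refit_d50(ntcp_model, cc_dvhs, target_ntcps, m_fixed, n_fixed):
            # Find the D50 for which NTCP values from collapsed-cone dose
            # distributions best match those from the pencil-beam algorithm.
            def loss(d50):
                predicted = [ntcp_model(dvh, d50, m_fixed, n_fixed) for dvh in cc_dvhs]
                return np.sum((np.asarray(predicted) - np.asarray(target_ntcps)) ** 2)
            result = minimize_scalar(loss, bounds=(10.0, 80.0), method="bounded")
            return result.x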

  14. Jamming protection of spread spectrum RFID system

    NASA Astrophysics Data System (ADS)

    Mazurek, Gustaw

    2006-10-01

    This paper presents a new transform-domain processing algorithm for the rejection of narrowband interference in RFID/DS-CDMA systems. The performance of the proposed algorithm has been verified via computer simulations, and implementation issues are discussed. The algorithm can be implemented in FPGA or DSP technology.
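
    Transform-domain excision of narrowband interference typically transforms a received block to the frequency domain, nulls (or clips) bins that stand out above a threshold, and transforms back; the spectrally flat DS-CDMA signal loses little energy in the process. A generic Python sketch of this idea (not the paper's specific algorithm):

        import numpy as np

        def excise_narrowband(block, threshold_factor=4.0):
            # Null FFT bins whose magnitude exceeds a multiple of the
            # median magnitude; wideband spread-spectrum content survives.
            spectrum = np.fft.fft(block)
            magnitude = np.abs(spectrum)
            spectrum[magnitude > threshold_factor * np.median(magnitude)] = 0.0
            return np.fft.ifft(spectrum).real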

  15. SU-E-T-516: Dosimetric Validation of AcurosXB Algorithm in Comparison with AAA & CCC Algorithms for VMAT Technique.

    PubMed

    Kathirvel, M; Subramanian, V Sai; Arun, G; Thirumalaiswamy, S; Ramalingam, K; Kumar, S Ashok; Jagadeesh, K

    2012-06-01

    To dosimetrically validate the AcurosXB algorithm for volumetric modulated arc therapy (VMAT) in comparison with the standard clinical Anisotropic Analytic Algorithm (AAA) and Collapsed Cone Convolution (CCC) dose calculation algorithms. The AcurosXB dose calculation algorithm is available with the Varian Eclipse treatment planning system (V10). It uses a grid-based Boltzmann equation solver to predict dose precisely in less time. This study was made to assess the algorithm's ability to predict dose accurately as delivered, for which five clinical cases each of brain, head & neck, thoracic, pelvic and SBRT treatments were taken. Verification plans were created on a multicube phantom with the iMatrixx-2D detector array, dose prediction was done with the AcurosXB, AAA and CCC (COMPASS system) algorithms, and the plans were delivered on a CLINAC-iX treatment machine. The delivered dose was captured in the iMatrixx plane for all 25 plans. The measured dose was taken as the reference to quantify the agreement of the AcurosXB calculation algorithm with the previously validated AAA and CCC algorithms. Gamma evaluation was performed with clinical criteria of 3 and 2 mm distance-to-agreement and 3% and 2% dose difference in the omnipro-I'MRT software. Plans were evaluated in terms of correlation coefficient, quantitative area gamma and average gamma. The study shows good agreement, with mean correlations of 0.9979±0.0012, 0.9984±0.0009 and 0.9979±0.0011 for AAA, CCC and Acuros, respectively. Mean area gamma for the 3mm/3% criterion was 98.80±1.04, 98.14±2.31 and 98.08±2.01, and for 2mm/2% was 93.94±3.83, 87.17±10.54 and 92.36±5.46, for AAA, CCC and Acuros, respectively. Mean average gamma for 3mm/3% was 0.26±0.07, 0.42±0.08 and 0.28±0.09, and for 2mm/2% was 0.39±0.10, 0.64±0.11 and 0.42±0.13, for AAA, CCC and Acuros, respectively. This study demonstrated that the AcurosXB algorithm has good agreement with AAA and CCC in terms of dose prediction. In conclusion, the AcurosXB algorithm provides a valid, accurate and fast alternative to the AAA and CCC algorithms in a busy clinical environment. © 2012 American Association of Physicists in Medicine.
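
    The gamma index quoted throughout these records combines a dose-difference and a distance-to-agreement criterion into a single pass/fail number per point. A brute-force 1D Python sketch of the global gamma (clinical tools use optimized 2D/3D searches; criteria shown are 3%/3 mm):

        import numpy as np

        def gamma_index_1d(x_eval, d_eval, x_ref, d_ref, dd=0.03, dta=3.0):
            # Global gamma: minimise the combined dose/distance metric over
            # all reference points; dd is the fractional dose tolerance,
            # dta the distance tolerance in mm.
            d_max = d_ref.max()
            gammas = np.empty(len(d_eval))
            for i, (x, d) in enumerate(zip(x_eval, d_eval)):
                dose_term = ((d - d_ref) / (dd * d_max)) ** 2
                dist_term = ((x - x_ref) / dta) ** 2
                gammas[i] = np.sqrt(np.min(dose_term + dist_term))
            return gammas   # pass rate = fraction of points with gamma <= 1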

  16. Effect of deformable registration on the dose calculated in radiation therapy planning CT scans of lung cancer patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cunliffe, Alexandra R.; Armato, Samuel G.; White, Bradley

    2015-01-15

    Purpose: To characterize the effects of deformable image registration of serial computed tomography (CT) scans on the radiation dose calculated from a treatment planning scan. Methods: Eighteen patients who received curative doses (≥60 Gy, 2 Gy/fraction) of photon radiation therapy for lung cancer treatment were retrospectively identified. For each patient, a diagnostic-quality pretherapy (4–75 days) CT scan and a treatment planning scan with an associated dose map were collected. To establish correspondence between scan pairs, a researcher manually identified anatomically corresponding landmark point pairs between the two scans. Pretherapy scans then were coregistered with planning scans (and associated dose maps) using the demons deformable registration algorithm and two variants of the Fraunhofer MEVIS algorithm (“Fast” and “EMPIRE10”). Landmark points in each pretherapy scan were automatically mapped to the planning scan using the displacement vector field output from each of the three algorithms. The Euclidean distance between manually and automatically mapped landmark points (d_E) and the absolute difference in planned dose (|ΔD|) were calculated. Using regression modeling, |ΔD| was modeled as a function of d_E, dose (D), dose standard deviation (SD_dose) in an eight-pixel neighborhood, and the registration algorithm used. Results: Over 1400 landmark point pairs were identified, with 58–93 (median: 84) points identified per patient. Average |ΔD| across patients was 3.5 Gy (range: 0.9–10.6 Gy). Registration accuracy was highest using the Fraunhofer MEVIS EMPIRE10 algorithm, with an average d_E across patients of 5.2 mm (compared with >7 mm for the other two algorithms). Consequently, average |ΔD| was also lowest using the Fraunhofer MEVIS EMPIRE10 algorithm. |ΔD| increased significantly as a function of d_E (0.42 Gy/mm), D (0.05 Gy/Gy), SD_dose (1.4 Gy/Gy), and the algorithm used (≤1 Gy). Conclusions: An average error of <4 Gy in radiation dose was introduced when points were mapped between CT scan pairs using deformable registration, with the majority of points yielding dose-mapping error <2 Gy (approximately 3% of the total prescribed dose). Registration accuracy was highest using the Fraunhofer MEVIS EMPIRE10 algorithm, resulting in the smallest errors in mapped dose. Dose differences following registration increased significantly with increasing spatial registration errors, dose, and dose gradient (i.e., SD_dose). This model provides a measurement of the uncertainty in the radiation dose when points are mapped between serial CT scans through deformable registration.
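
    The regression described above can be reproduced in outline with ordinary least squares. The sketch below is a generic OLS fit with a hypothetical column layout (intercept, d_E, D, SD_dose, and dummy indicators for two of the three algorithms), not the authors' statistical code.

        import numpy as np

        def fit_dose_error_model(d_e, dose, sd_dose, algo_dummies, abs_dd):
            """Least-squares fit of |dD| on registration error and local
            dose features; algo_dummies is an (n, 2) indicator matrix."""
            X = np.column_stack([np.ones_like(d_e), d_e, dose, sd_dose, algo_dummies])
            coef, *_ = np.linalg.lstsq(X, abs_dd, rcond=None)
            return coef  # [intercept, Gy/mm, Gy/Gy, Gy/Gy, algorithm offsets]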

  17. Investigation of photon beam models in heterogeneous media of modern radiotherapy.

    PubMed

    Ding, W; Johnston, P N; Wong, T P Y; Bubb, I F

    2004-06-01

    This study investigates the performance of photon beam models in dose calculations involving heterogeneous media in modern radiotherapy. Three dose calculation algorithms implemented in the CMS FOCUS treatment planning system have been assessed and validated using ionization chambers, thermoluminescent dosimeters (TLDs) and film. The algorithms include the multigrid superposition (MGS) algorithm, fast Fourier Transform Convolution (FFTC) algorithm and Clarkson algorithm. Heterogeneous phantoms used in the study consist of air cavities, lung analogue and an anthropomorphic phantom. Depth dose distributions along the central beam axis for 6 MV and 10 MV photon beams with field sizes of 5 cm x 5 cm and 10 cm x 10 cm were measured in the air cavity phantoms and lung analogue phantom. Point dose measurements were performed in the anthropomorphic phantom. Calculated results with the three dose calculation algorithms were compared with measured results. In the air cavity phantoms, the maximum dose differences between the algorithms and the measurements were found at the distal surface of the air cavity with a 10 MV photon beam and a 5 cm x 5 cm field size. The differences were 3.8%, 24.9% and 27.7% for the MGS, FFTC and Clarkson algorithms, respectively. Experimental measurements of secondary electron build-up range beyond the air cavity showed an increase with decreasing field size, increasing energy and increasing air cavity thickness. The maximum dose differences in the lung analogue with the 5 cm x 5 cm field size were found to be 0.3%, 4.9% and 6.9% for the MGS, FFTC and Clarkson algorithms with a 6 MV photon beam, and 0.4%, 6.3% and 9.1% with a 10 MV photon beam, respectively. In the anthropomorphic phantom, the dose differences between calculations using the MGS algorithm and measurements with TLD rods were less than +/-4.5% for 6 MV and 10 MV photon beams with the 10 cm x 10 cm field size and the 6 MV photon beam with the 5 cm x 5 cm field size, and within +/-7.5% for 10 MV with the 5 cm x 5 cm field size, respectively. The FFTC and Clarkson algorithms overestimate doses at all dose points in the lung of the anthropomorphic phantom. In conclusion, the MGS is the most accurate dose calculation algorithm of the investigated photon beam models. It is strongly recommended for implementation in modern radiotherapy with multiple small fields when heterogeneous media are in the treatment fields.

  18. A TPS kernel for calculating survival vs. depth distributions in a carbon radiotherapy beam, based on Katz's cellular Track Structure Theory.

    PubMed

    Waligórski, M P R; Grzanka, L; Korcyl, M; Olko, P

    2015-09-01

    An algorithm was developed for a treatment planning system (TPS) kernel for carbon radiotherapy in which Katz's Track Structure Theory of cellular survival (TST) is applied as the radiobiology component. The physical beam model is based on available tabularised data, prepared by Monte Carlo simulations of a set of pristine carbon beams of different input energies. An optimisation tool developed for this purpose is used to find the composition of pristine carbon beams, in input energies and fluences, which delivers a pre-selected depth-dose profile over the spread-out Bragg peak (SOBP) region. Using an extrapolation algorithm, energy-fluence spectra of the primary carbon ions and of all their secondary fragments are obtained over regular steps of beam depth. To obtain survival vs. depth distributions, the TST calculation is applied to the energy-fluence spectra of the mixed field of primary ions and of their secondary products at the given beam depths. Katz's TST offers a unique analytical and quantitative prediction of cell survival in such mixed ion fields. By optimising the pristine beam composition to a published depth-dose profile over the SOBP region of a carbon beam and using TST model parameters representing the survival of CHO (Chinese Hamster Ovary) cells in vitro, it was possible to satisfactorily reproduce a published data set of CHO cell survival vs. depth measurements after carbon ion irradiation. The authors also show by a TST calculation that 'biological dose' is neither linear nor additive. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
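
    The optimisation step, finding pristine-beam fluence weights whose summed depth-dose curves reproduce the prescribed SOBP profile, can be cast as a non-negative least-squares problem. The paper does not name its solver, so NNLS below is an illustrative choice.

        import numpy as np
        from scipy.optimize import nnls

        def sobp_weights(pristine_ddds, target_ddd):
            """Non-negative fluence weights for a library of pristine Bragg
            curves so that their weighted sum matches a target depth dose.

            pristine_ddds -- (n_depths, n_beams) matrix, one column per
                             input energy; target_ddd -- (n_depths,) profile
            """
            weights, residual = nnls(pristine_ddds, target_ddd)
            return weights, residual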

  19. Three-Dimensional Electron Beam Dose Calculations.

    NASA Astrophysics Data System (ADS)

    Shiu, Almon Sowchee

    The MDAH pencil-beam algorithm developed by Hogstrom et al (1981) has been widely used in clinics for electron beam dose calculations for radiotherapy treatment planning. The primary objective of this research was to address several deficiencies of that algorithm and to develop an enhanced version. Two enhancements have been incorporated into the pencil-beam algorithm; one models fluence rather than planar fluence, and the other models the bremsstrahlung dose using measured beam data. Comparisons of the resulting calculated dose distributions with measured dose distributions for several test phantoms have been made. From these results it is concluded (1) that the fluence-based algorithm is more accurate for dose calculation in an inhomogeneous slab phantom, and (2) that the fluence-based calculation provides only a limited improvement to the accuracy of the calculated dose in the region just downstream of the lateral edge of an inhomogeneity. The source of the latter inaccuracy is believed to be primarily due to assumptions made in the pencil beam's modeling of the complex phantom or patient geometry. A pencil-beam redefinition model was developed for the calculation of electron beam dose distributions in three dimensions. The primary aim of this redefinition model was to solve the dosimetry problem presented by deep inhomogeneities, which was the major deficiency of the enhanced version of the MDAH pencil-beam algorithm. The pencil-beam redefinition model is based on the theory of electron transport by redefining the pencil beams at each layer of the medium. The unique approach of this model is that all the physical parameters of a given pencil beam are characterized for multiple energy bins. Comparisons of the calculated dose distributions with measured dose distributions for a homogeneous water phantom and for phantoms with deep inhomogeneities have been made. From these results it is concluded that the redefinition algorithm is superior to the conventional, fluence-based, pencil-beam algorithm, especially in predicting the dose distribution downstream of a local inhomogeneity. The accuracy of this algorithm appears sufficient for clinical use, and the algorithm is structured for future expansion of the physical model if required for site-specific treatment planning problems.

  20. Fast and accurate detection of spread source in large complex networks.

    PubMed

    Paluch, Robert; Lu, Xiaoyan; Suchecki, Krzysztof; Szymański, Bolesław K; Hołyst, Janusz A

    2018-02-06

    Spread over complex networks is a ubiquitous process with increasingly wide applications. Locating spread sources is often important, e.g., finding patient one in an epidemic or the source of a rumor spreading in a social network. Pinto, Thiran and Vetterli introduced an algorithm (PTVA) to solve the important case of this problem in which a limited set of nodes act as observers and report the times at which the spread reached them. PTVA uses all observers to find a solution. Here we propose a new approach in which observers with low-quality information (i.e., with large spread encounter times) are ignored and potential sources are selected based on the likelihood gradient from high-quality observers. The original complexity of PTVA is O(N^α), where α ∈ (3, 4) depends on the network topology and the number of observers (N denotes the number of nodes in the network). Our Gradient Maximum Likelihood Algorithm (GMLA) reduces this complexity to O(N² log N). Extensive numerical tests performed on synthetic networks and the real Gnutella network, under the limitation that the identities of spreaders are unknown to observers, demonstrate that for scale-free networks GMLA yields higher-quality localization results than PTVA does.
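
    A toy version of the idea, keeping only the k earliest (highest-quality) observers and scoring each candidate source by how well graph distance explains the observed arrival times, is sketched below. The Pearson-correlation score is a simplification standing in for the likelihood gradient that GMLA actually maximises.

        import numpy as np
        from collections import deque

        def bfs_distances(adj, src):
            """Hop distances from src in an unweighted graph (dict of sets)."""
            dist, queue = {src: 0}, deque([src])
            while queue:
                u = queue.popleft()
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        queue.append(v)
            return dist

        def locate_source(adj, arrival_times, k_obs):
            """Candidate whose distances best correlate with arrival times."""
            observers = sorted(arrival_times, key=arrival_times.get)[:k_obs]
            times = np.array([arrival_times[o] for o in observers], dtype=float)
            best, best_score = None, -np.inf
            for cand in adj:
                dist = bfs_distances(adj, cand)
                d = np.array([dist.get(o, np.inf) for o in observers])
                if not np.isfinite(d).all():
                    continue
                score = np.corrcoef(times, d)[0, 1]   # arrival time ~ distance
                if score > best_score:
                    best, best_score = cand, score
            return best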

  1. Dynamics and control of diseases in networks with community structure.

    PubMed

    Salathé, Marcel; Jones, James H

    2010-04-08

    The dynamics of infectious diseases spread via direct person-to-person transmission (such as influenza, smallpox, HIV/AIDS, etc.) depends on the underlying host contact network. Human contact networks exhibit strong community structure. Understanding how such community structure affects epidemics may provide insights for preventing the spread of disease between communities by changing the structure of the contact network through pharmaceutical or non-pharmaceutical interventions. We use empirical and simulated networks to investigate the spread of disease in networks with community structure. We find that community structure has a major impact on disease dynamics, and we show that in networks with strong community structure, immunization interventions targeted at individuals bridging communities are more effective than those simply targeting highly connected individuals. Because the structure of relevant contact networks is generally not known, and vaccine supply is often limited, there is great need for efficient vaccination algorithms that do not require full knowledge of the network. We developed an algorithm that acts only on locally available network information and is able to quickly identify targets for successful immunization intervention. The algorithm generally outperforms existing algorithms when vaccine supply is limited, particularly in networks with strong community structure. Understanding the spread of infectious diseases and designing optimal control strategies is a major goal of public health. Social networks show marked patterns of community structure, and our results, based on empirical and simulated data, demonstrate that community structure strongly affects disease dynamics. These results have implications for the design of control strategies.

  2. Evaluation of six TPS algorithms in computing entrance and exit doses.

    PubMed

    Tan, Yun I; Metwaly, Mohamed; Glegg, Martin; Baggarley, Shaun; Elliott, Alex

    2014-05-08

    Entrance and exit doses are commonly measured in in vivo dosimetry for comparison with expected values, usually generated by the treatment planning system (TPS), to verify accuracy of treatment delivery. This report aims to evaluate the accuracy of six TPS algorithms in computing entrance and exit doses for a 6 MV beam. The algorithms tested were: pencil beam convolution (Eclipse PBC), analytical anisotropic algorithm (Eclipse AAA), AcurosXB (Eclipse AXB), FFT convolution (XiO Convolution), multigrid superposition (XiO Superposition), and Monte Carlo photon (Monaco MC). Measurements with ionization chamber (IC) and diode detector in water phantoms were used as a reference. Comparisons were done in terms of central axis point dose, 1D relative profiles, and 2D absolute gamma analysis. Entrance doses computed by all TPS algorithms agreed to within 2% of the measured values. Exit doses computed by XiO Convolution, XiO Superposition, Eclipse AXB, and Monaco MC agreed with the IC measured doses to within 2%-3%. Meanwhile, Eclipse PBC and Eclipse AAA computed exit doses were higher than the IC measured doses by up to 5.3% and 4.8%, respectively. Both algorithms assume that full backscatter exists even at the exit level, leading to an overestimation of exit doses. Despite good agreements at the central axis for Eclipse AXB and Monaco MC, 1D relative comparisons showed profiles mismatched at depths beyond 11.5 cm. Overall, the 2D absolute gamma (3%/3 mm) pass rates were better for Monaco MC, while Eclipse AXB failed mostly at the outer 20% of the field area. The findings of this study serve as a useful baseline for the implementation of entrance and exit in vivo dosimetry in clinical departments utilizing any of these six common TPS algorithms for reference comparison.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maurer, J; Sintay, B; Manning, M

    Purpose: This study evaluates a novel algorithm that can be used with any treatment planning system for simple and rapid generation of stereotactic radiosurgery (SRS) plans for treating multiple brain metastases using a single isocenter dynamic conformal arc (DCA) approach. This technique is compared with a single isocenter volumetric modulated arc therapy (VMAT) technique in terms of delivery time, conformity, low dose spread and delivery accuracy. Methods: Five patients, with a total of 37 (5-11) targets, were planned using a previously published method for generating optimal VMAT plans and using the proposed DCA algorithm. All planning target volumes (PTVs) were planned to 20 Gy, meeting a minimum 99% coverage and maximum 135% hot spot for both techniques. Quality assurance was performed using radiochromic film, with films placed in the high dose regions of each PTV. Normal tissue volumes receiving 12 Gy and 6 Gy (V12 and V6) were computed for each plan. Conformity index (CI) and gamma evaluations (95% of points passing 4%/0.5mm) were computed for each PTV. Results: Delivery times, including beam on and table rotation times, were comparable: 17-22 minutes for all deliveries. V12s for DCA plans were (18.5±15.2 cc) vs. VMAT (19.7±14.4 cc). V6s were significantly lower for DCA (69.0±52.0 cc) compared with VMAT (154.0±91.0 cc) (p << 0.05). CIs for VMAT targets were (1.38±0.50) vs. DCA (1.61±0.41). 36 of 37 DCA planned targets passed gamma tests, while 29 of 37 VMAT planned targets passed. Conclusion: Single isocenter DCA plans were easily achieved. The evaluation suggests that DCA may represent a favorable technique compared with VMAT for multiple target SRS by reducing dose to normal tissue and more accurately depicting deliverable dose.
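
    The normal-tissue and conformity metrics reported above reduce to voxel counts on the dose grid. A short sketch, assuming the RTOG conformity index definition (the abstract does not state which CI was used):

        import numpy as np

        def v_x(dose, voxel_cc, threshold_gy):
            """Tissue volume (cc) receiving at least threshold_gy, e.g. V12, V6."""
            return np.count_nonzero(dose >= threshold_gy) * voxel_cc

        def rtog_ci(dose, ptv_mask, rx_gy):
            """RTOG CI: prescription isodose volume over PTV volume."""
            return np.count_nonzero(dose >= rx_gy) / np.count_nonzero(ptv_mask)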

  4. Sci-Thur AM: YIS – 06: A Monte Carlo study of macro- and microscopic dose descriptors and the microdosimetric spread using detailed cellular models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oliver, Patricia; Thomson, Rowan

    2016-08-15

    Purpose: To develop Monte Carlo models of cell clusters to investigate the relationships between macro- and microscopic dose descriptors, quantify the microdosimetric spread in energy deposition for subcellular targets, and determine how these results depend on the computational model. Methods: Microscopic tissue structure is modelled as clusters of 13 to 150 cells, with cell (nuclear) radii between 5 and 10 microns (2 and 9 microns). Energy imparted per unit mass (specific energy or dose) is scored in the nucleus (D_nuc) and cytoplasm (D_cyt) for incident photon energies from 20 to 370 keV. Dose-to-water (D_w,m) and dose-to-medium (D_m,m) are compared to D_nuc and D_cyt. Single cells and single nuclear cavities are also simulated. Results: D_nuc and D_cyt are sensitive to the surrounding environment, with deviations of up to 13% for a single nucleus/cell compared with a multicellular cluster. These dose descriptors vary with cell and nucleus size by up to 10%. D_nuc and D_cyt differ from D_w,m and D_m,m by up to 32%. The microdosimetric spread is sensitive to whether cells are arranged randomly or in a hexagonal lattice, and whether subcellular compartment sizes are sampled from a normal distribution or are constant throughout the cluster. Conclusions: D_nuc and D_cyt are sensitive to cell morphology, elemental composition and the presence of surrounding cells. The microdosimetric spread was investigated using realistic elemental compositions for the nucleus and cytoplasm, and depends strongly on subcellular compartment size, source energy and dose.

  5. Application of Deconvolution Algorithm of Point Spread Function in Improving Image Quality: An Observer Preference Study on Chest Radiography.

    PubMed

    Chae, Kum Ju; Goo, Jin Mo; Ahn, Su Yeon; Yoo, Jin Young; Yoon, Soon Ho

    2018-01-01

    To evaluate the preference of observers for the image quality of chest radiography using the deconvolution algorithm of the point spread function (PSF) (TRUVIEW ART algorithm, DRTECH Corp.) compared with that of original chest radiography for visualization of anatomic regions of the chest. Fifty pairs of prospectively enrolled posteroanterior chest radiographs, collected with the standard protocol and with the additional TRUVIEW ART algorithm, were compared by four chest radiologists. This algorithm corrects scattered signals generated by a scintillator. Readers independently evaluated the visibility of 10 anatomical regions and overall image quality with a 5-point scale of preference. The significance of the differences in readers' preference was tested with a Wilcoxon signed-rank test. All four readers preferred the images with the algorithm to those without it for all 10 anatomical regions (mean, 3.6; range, 3.2-4.0; p < 0.001) and for the overall image quality (mean, 3.8; range, 3.3-4.0; p < 0.001). The most preferred anatomical regions were the azygoesophageal recess, thoracic spine, and unobscured lung. The visibility of chest anatomical structures with the deconvolution algorithm of the PSF was superior to that of original chest radiography.

  6. Short-term reproducibility of computed tomography-based lung density measurements in alpha-1 antitrypsin deficiency and smokers with emphysema.

    PubMed

    Shaker, S B; Dirksen, A; Laursen, L C; Maltbaek, N; Christensen, L; Sander, U; Seersholm, N; Skovgaard, L T; Nielsen, L; Kok-Jensen, A

    2004-07-01

    To study the short-term reproducibility of lung density measurements by multi-slice computed tomography (CT) using three different radiation doses and three reconstruction algorithms. Twenty-five patients with smoker's emphysema and 25 patients with alpha1-antitrypsin deficiency underwent three scans at 2-week intervals. A low-dose protocol was applied, and images were reconstructed with bone, detail, and soft algorithms. Total lung volume (TLV), 15th percentile density (PD-15), and relative area at -910 Hounsfield units (RA-910) were obtained from the images using Pulmo-CMS software. The reproducibility of PD-15 and RA-910 and the influence of radiation dose, reconstruction algorithm, and type of emphysema were then analysed. The overall coefficient of variation of volume-adjusted PD-15 for all combinations of radiation dose and reconstruction algorithm was 3.7%. The overall standard deviation of volume-adjusted RA-910 was 1.7% (corresponding to a coefficient of variation of 6.8%). Radiation dose, reconstruction algorithm, and type of emphysema had no significant influence on the reproducibility of PD-15 and RA-910. However, the bone algorithm and very low radiation dose resulted in overestimation of the extent of emphysema. Lung density measurement by CT is a sensitive marker for quantitating both subtypes of emphysema. A CT protocol with a radiation dose down to 16 mAs and a soft or detail reconstruction algorithm is recommended.
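
    Both density indices are simple summaries of the HU histogram of the segmented lung; a sketch is given below (the volume adjustment applied in the study is omitted for brevity):

        import numpy as np

        def lung_density_indices(lung_hu):
            """PD-15 and RA-910 from the HU values of segmented lung voxels."""
            pd15 = np.percentile(lung_hu, 15)            # 15th percentile density
            ra910 = 100.0 * np.mean(lung_hu < -910.0)    # % of voxels below -910 HU
            return pd15, ra910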

  7. Dosimetric comparison of lung stereotactic body radiotherapy treatment plans using averaged computed tomography and end-exhalation computed tomography images: Evaluation of the effect of different dose-calculation algorithms and prescription methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitsuyoshi, Takamasa; Nakamura, Mitsuhiro, E-mail: m_nkmr@kuhp.kyoto-u.ac.jp; Matsuo, Yukinori

    The purpose of this article is to quantitatively evaluate differences in dose distributions calculated using various computed tomography (CT) datasets, dose-calculation algorithms, and prescription methods in stereotactic body radiotherapy (SBRT) for patients with early-stage lung cancer. Data on 29 patients with early-stage lung cancer treated with SBRT were retrospectively analyzed. Averaged CT (Ave-CT) and expiratory CT (Ex-CT) images were reconstructed for each patient using 4-dimensional CT data. Dose distributions were initially calculated using the Ave-CT images and recalculated (in the same monitor units [MUs]) by employing Ex-CT images with the same beam arrangements. The dose-volume parameters, including D_95, D_90, D_50, and D_2 of the planning target volume (PTV), were compared between the 2 image sets. To explore the influence of dose-calculation algorithms and prescription methods on the differences in dose distributions evident between Ave-CT and Ex-CT images, we calculated dose distributions using the following 3 different algorithms: x-ray Voxel Monte Carlo (XVMC), Acuros XB (AXB), and the anisotropic analytical algorithm (AAA). We also used 2 different dose-prescription methods: the isocenter prescription and the PTV periphery prescription methods. All differences in PTV dose-volume parameters calculated using Ave-CT and Ex-CT data were within 3 percentage points (%pts) employing the isocenter prescription method, and within 1.5%pts using the PTV periphery prescription method, irrespective of which of the 3 algorithms (XVMC, AXB, and AAA) was employed. The frequencies of dose-volume parameters differing by >1%pt when the XVMC and AXB were used were greater than those associated with the use of the AAA, regardless of the dose-prescription method employed. All differences in PTV dose-volume parameters calculated using Ave-CT and Ex-CT data on patients who underwent lung SBRT were within 3%pts, regardless of the dose-calculation algorithm or the dose-prescription method employed.
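
    The D-parameters compared above come straight from the PTV dose-volume histogram: D_x, the minimum dose to the hottest x% of the volume, is the (100 - x)th percentile of the voxel doses.

        import numpy as np

        def dvh_dx(ptv_doses, x_pct):
            """D_x of a PTV, e.g. dvh_dx(ptv, 95) for D_95."""
            return np.percentile(ptv_doses, 100.0 - x_pct)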

  8. Potential of discrete Gaussian edge feathering method for improving abutment dosimetry in eMLC-delivered segmented-field electron conformal therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eley, John G.; Hogstrom, Kenneth R.; Matthews, Kenneth L.

    2011-12-15

    Purpose: The purpose of this work was to investigate the potential of discrete Gaussian edge feathering of the higher energy electron fields for improving abutment dosimetry in the planning volume when using an electron multileaf collimator (eMLC) to deliver segmented-field electron conformal therapy (ECT). Methods: A discrete (five-step) Gaussian edge spread function was used to match dose penumbras of differing beam energies (6-20 MeV) at a specified depth in a water phantom. Software was developed to define the leaf positions of an eMLC that most closely fit each electron field shape. The effect of 1D edge feathering of the higher energy field on dose homogeneity was computed and measured for segmented-field ECT treatment plans for three 2D PTVs in a water phantom, i.e., depth from the water surface to the distal PTV surface varied as a function of the x-axis (parallel to leaf motion) and remained constant along the y-axis (perpendicular to leaf motion). Additionally, the effect of 2D edge feathering was computed and measured for one radially symmetric, 3D PTV in a water phantom, i.e., depth from the water surface to the distal PTV surface varied as a function of both axes. For the 3D PTV, the feathering scheme was evaluated for 0.1-1.0-cm leaf widths. Dose calculations were performed using the pencil beam dose algorithm in the Pinnacle³ treatment planning system. Dose verification measurements were made using a prototype eMLC (1-cm leaf width). Results: 1D discrete Gaussian edge feathering reduced the standard deviation of dose in the 2D PTVs by 34, 34, and 39%. In the 3D PTV, the broad leaf width (1 cm) of the eMLC hindered the 2D application of the feathering solution to the 3D PTV, and the standard deviation of dose increased by 10%. However, 2D discrete Gaussian edge feathering with simulated eMLC leaf widths of 0.1-0.5 cm reduced the standard deviation of dose in the 3D PTV by 33-28%, respectively. Conclusions: A five-step discrete Gaussian edge spread function applied in 2D improves the abutment dosimetry but requires an eMLC leaf resolution better than 1 cm.
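
    A sketch of how the five discrete intensity levels of such a Gaussian edge spread can be generated, by sampling the cumulative Gaussian at the centre of each feathering strip. The sigma and step width here are hypothetical; in practice they would be chosen to match the lower-energy beam's penumbra at the matching depth.

        import numpy as np
        from math import erf, sqrt

        def gaussian_edge_levels(n_steps=5, sigma_cm=0.5, step_cm=0.5):
            """Relative weights for n_steps abutting strips approximating
            a cumulative-Gaussian edge profile."""
            offsets = (np.arange(n_steps) - (n_steps - 1) / 2.0) * step_cm
            return np.array([0.5 * (1.0 + erf(x / (sigma_cm * sqrt(2.0))))
                             for x in offsets])

        # Delivering the higher-energy field edge as strips with these
        # weights blurs its penumbra toward that of the lower energy.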

  9. Spread F in the Midlatitude Ionosphere According to DPS-4 Ionosonde Data

    NASA Astrophysics Data System (ADS)

    Panchenko, V. A.; Telegin, V. A.; Vorob'ev, V. G.; Zhbankov, G. A.; Yagodkina, O. I.; Rozhdestvenskaya, V. I.

    2018-03-01

    The results of studying spread F obtained from the DPS-4 ionosonde data at the observatory of the Pushkov Institute of Terrestrial Magnetism, Ionosphere, and Radio Wave Propagation (Moscow) are presented. The methodological questions that arise during the study of the spread F phenomenon in the ionosphere are considered; the current results of terrestrial observations are compared with previously published data and the results of sounding onboard an Earth-satellite vehicle. The automated algorithm for estimation of the intensity of frequency spread F, which was developed by the authors and was successfully verified via comparison of the data of the digisonde DPS-4 and the results of manual processing, is described. The algorithm makes it possible to quantify the intensity of spread F in megahertz (the dFs parameter) and in the number of points (0, 1, 2, 3). The strongest spread (3 points) is shown to be most likely around midnight, while the weakest spread (0 points) is highly likely to occur during the daytime. The diurnal distribution of a 1-2 point spread F in the winter indicates the presence of additional maxima at 0300-0600 UT and 1400-1700 UT, which may appear due to the terminator. Despite the large volume of processed data, we cannot definitively state that the appearance of spread F depends on the magnetic activity indices Kp, Dst, and AL, although the values of the dFs frequency spread interval strongly increased both by day and by night during the magnetic storm of March 17-22, 2015, especially in the phase of storm recovery on March 20-22.

  10. In vivo verification of radiation dose delivered to healthy tissue during radiotherapy for breast cancer

    NASA Astrophysics Data System (ADS)

    Lonski, P.; Taylor, M. L.; Hackworth, W.; Phipps, A.; Franich, R. D.; Kron, T.

    2014-03-01

    Different treatment planning system (TPS) algorithms calculate radiation dose in different ways. This work compares measurements made in vivo to the dose calculated at out-of-field locations using three different commercially available algorithms in the Eclipse treatment planning system. LiF:Mg,Cu,P thermoluminescent dosimeter (TLD) chips were placed with 1 cm build-up at six locations on the contralateral side of 5 patients undergoing radiotherapy for breast cancer. TLD readings were compared to calculations of the Pencil Beam Convolution (PBC), Anisotropic Analytical Algorithm (AAA) and Acuros XB (XB) algorithms. AAA predicted zero dose at points beyond 16 cm from the field edge. In the same region PBC returned an unrealistically constant result independent of distance, while XB showed good agreement with measured data, although it consistently underestimated dose by ~0.1% of the prescription dose. At points closer to the field edge XB was the superior algorithm, agreeing with TLD results to within 15% of measured dose. Both AAA and PBC showed mixed agreement, with overall discrepancies considerably greater than XB. While XB is certainly the preferable algorithm, it should be noted that TPS algorithms in general are not designed to calculate dose at peripheral locations, and calculation results in such regions should be treated with caution.

  11. Midline Dose Verification with Diode In Vivo Dosimetry for External Photon Therapy of Head and Neck and Pelvis Cancers During Initial Large-Field Treatments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tung, Chuan-Jong; Department of Biomedical Engineering and Environmental Sciences, National Tsing Hua University, Hsinchu, Taiwan; Yu, Pei-Chieh

    2010-01-01

    During radiotherapy treatments, quality assurance/control is essential, particularly of dose delivery to patients. This study was designed to verify midline doses with diode in vivo dosimetry. Dosimetry was studied for 6-MV bilateral fields in head and neck cancer treatments and 10-MV bilateral and anteroposterior/posteroanterior (AP/PA) fields in pelvic cancer treatments. Calibrations with corrections of diodes were performed using plastic water phantoms; 190 and 100 portals were studied for head and neck and pelvis treatments, respectively. Calculations of midline doses were made using the midline transmission, arithmetic mean, and geometric mean algorithms. These midline doses were compared with the treatment planning system target doses for lateral or AP (PA) portals and paired opposed portals. For head and neck treatments, all 3 algorithms were satisfactory, although the geometric mean algorithm was less accurate and more uncertain. For pelvis treatments, the arithmetic mean algorithm seemed unacceptable, whereas the other algorithms were satisfactory. The random error was reduced by using averaged midline doses of paired opposed portals because the asymmetric effect was averaged out. Considering the simplicity of in vivo dosimetry, the arithmetic mean and geometric mean algorithms should be adopted for head/neck and pelvis treatments, respectively.
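
    Given entrance and exit diode readings for a portal, the arithmetic-mean and geometric-mean estimates are one-liners; the midline transmission algorithm additionally requires a measured transmission factor and is omitted here.

        from math import sqrt

        def midline_arithmetic(d_entrance, d_exit):
            """Arithmetic-mean midline dose estimate for one portal."""
            return 0.5 * (d_entrance + d_exit)

        def midline_geometric(d_entrance, d_exit):
            """Geometric-mean midline dose estimate for one portal."""
            return sqrt(d_entrance * d_exit)

        # For paired opposed portals, averaging the two portals' estimates
        # cancels much of the asymmetry error noted above.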

  12. SU-E-T-454: Dosimetric Comparison between Pencil Beam and Monte Carlo Algorithms for SBRT Lung Treatment Using IPlan V4.1 TPS and CIRS Thorax Phantom.

    PubMed

    Fernandez, M Castrillon; Venencia, C; Garrigó, E; Caussa, L

    2012-06-01

    To compare measured and calculated doses using the Pencil Beam (PB) and Monte Carlo (MC) algorithms on a CIRS thorax phantom for SBRT lung treatments. A 6 MV photon beam generated by a Primus linac with an Optifocus MLC (Siemens) was used. Dose calculation was done using the iPlan v4.1.2 TPS (BrainLAB) with the PB and MC (dose to water and dose to medium) algorithms. The commissioning of both algorithms was done by reproducing experimental measurements in water. A CIRS thorax phantom was used to compare doses using a Farmer-type ion chamber (PTW) and EDR2 radiographic films (KODAK). The ionization chamber, in a tissue-equivalent insert, was placed in two positions of lung tissue and was irradiated using three treatment plans. Axial dose distributions were measured for four treatment plans using conformal and IMRT techniques. Dose distribution comparisons were done by dose profiles and gamma index (3%/3mm). For the studied beam configurations, ion chamber measurements show that PB overestimates the dose by up to 8.5%, whereas MC has a maximum variation of 1.6%. Dosimetric analysis using dose profiles shows that PB overestimates the dose in the region corresponding to the lung by up to 16%. For the axial dose distribution comparison, the percentage of pixels with gamma index greater than one for MC versus PB was, plan 1: 95.6% versus 87.4%; plan 2: 91.2% versus 77.6%; plan 3: 99.7% versus 93.1%; and plan 4: 98.8% versus 91.7%. It was confirmed that the lowest dosimetric errors calculated with the MC algorithm appear when the spatial resolution and variance decrease, at the expense of increased computation time. The agreement between measured and calculated doses, in a phantom with lung heterogeneities, is better with the MC algorithm. The PB algorithm overestimates the dose in lung tissue, which could have a clinical impact in SBRT lung treatments. © 2012 American Association of Physicists in Medicine.

  13. Comparison between variable and fixed dwell-time PN acquisition algorithms. [for synchronization in pseudonoise spread spectrum systems

    NASA Technical Reports Server (NTRS)

    Braun, W. R.

    1981-01-01

    Pseudo noise (PN) spread spectrum systems require a very accurate alignment between the PN code epochs at the transmitter and receiver. This synchronism is typically established through a two-step algorithm, including a coarse synchronization procedure and a fine synchronization procedure. A standard approach for the coarse synchronization is a sequential search over all code phases. The measurement of the power in the filtered signal is used to either accept or reject the code phase under test as the phase of the received PN code. This acquisition strategy, called a single dwell-time system, has been analyzed by Holmes and Chen (1977). A synopsis of the field of sequential analysis as it applies to the PN acquisition problem is provided. From this, the implementation of the variable dwell time algorithm as a sequential probability ratio test is developed. The performance of this algorithm is compared to the optimum detection algorithm and to the fixed dwell-time system.
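
    The variable dwell-time detector can be sketched as a sequential probability ratio test on Gaussian power samples with Wald's thresholds; the means, variance and error probabilities below are hypothetical.

        import numpy as np

        def sprt_dwell(samples, mean_h0, mean_h1, sigma, alpha=1e-3, beta=1e-3):
            """Accumulate the log-likelihood ratio of power samples until it
            crosses a Wald threshold: H1 = code phase aligned, H0 = not."""
            upper = np.log((1.0 - beta) / alpha)    # accept H1
            lower = np.log(beta / (1.0 - alpha))    # accept H0
            llr = 0.0
            for n, z in enumerate(samples, start=1):
                llr += ((z - mean_h0) ** 2 - (z - mean_h1) ** 2) / (2.0 * sigma ** 2)
                if llr >= upper:
                    return "accept", n    # declare coarse sync at this phase
                if llr <= lower:
                    return "reject", n    # advance to the next code phase
            return "undecided", len(samples)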

  14. Evidence-based algorithm for heparin dosing before cardiopulmonary bypass. Part 1: Development of the algorithm.

    PubMed

    McKinney, Mark C; Riley, Jeffrey B

    2007-12-01

    The incidence of heparin resistance during adult cardiac surgery with cardiopulmonary bypass has been reported at 15%-20%. The consistent use of a clinical decision-making algorithm may increase the consistency of patient care and likely reduce the total required heparin dose and other problems associated with heparin dosing. After a directed survey of practicing perfusionists regarding treatment of heparin resistance and a literature search for high-level evidence regarding the diagnosis and treatment of heparin resistance, an evidence-based decision-making algorithm was constructed. The face validity of the algorithm decisive steps and logic was confirmed by a second survey of practicing perfusionists. The algorithm begins with review of the patient history to identify predictors for heparin resistance. The definition for heparin resistance contained in the algorithm is an activated clotting time < 450 seconds with > 450 IU/kg heparin loading dose. Based on the literature, the treatment for heparin resistance used in the algorithm is anti-thrombin III supplement. The algorithm seems to be valid and is supported by high-level evidence and clinician opinion. The next step is a human randomized clinical trial to test the clinical procedure guideline algorithm vs. current standard clinical practice.
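
    The decisive step of the algorithm, as stated above, reduces to a simple predicate; the surrounding history review and antithrombin III dosing are clinical context not captured here.

        def heparin_resistant(act_seconds, loading_dose_iu_per_kg):
            """The algorithm's working definition of heparin resistance:
            ACT below 450 s despite a loading dose above 450 IU/kg."""
            return act_seconds < 450 and loading_dose_iu_per_kg > 450

        # Per the algorithm, a resistant patient receives antithrombin III
        # supplementation rather than further heparin escalation.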

  15. Automated Phase Segmentation for Large-Scale X-ray Diffraction Data Using a Graph-Based Phase Segmentation (GPhase) Algorithm.

    PubMed

    Xiong, Zheng; He, Yinyan; Hattrick-Simpers, Jason R; Hu, Jianjun

    2017-03-13

    The creation of composition-processing-structure relationships currently represents a key bottleneck for data analysis for high-throughput experimental (HTE) material studies. Here we propose an automated phase diagram attribution algorithm for HTE data analysis that uses a graph-based segmentation algorithm and Delaunay tessellation to create a crystal phase diagram from high throughput libraries of X-ray diffraction (XRD) patterns. We also propose the sample-pair based objective evaluation measures for the phase diagram prediction problem. Our approach was validated using 278 diffraction patterns from a Fe-Ga-Pd composition spread sample with a prediction precision of 0.934 and a Matthews Correlation Coefficient score of 0.823. The algorithm was then applied to the open Ni-Mn-Al thin-film composition spread sample to obtain the first predicted phase diagram mapping for that sample.
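
    One reading of the proposed sample-pair evaluation: treat every pair of measurement points as a binary case, same phase region or not, in both the true and predicted maps, then compute the Matthews correlation from the resulting confusion counts. A sketch under that interpretation:

        import numpy as np
        from itertools import combinations

        def pairwise_mcc(true_labels, pred_labels):
            """Matthews correlation over all sample pairs."""
            tp = fp = fn = tn = 0
            for i, j in combinations(range(len(true_labels)), 2):
                same_true = true_labels[i] == true_labels[j]
                same_pred = pred_labels[i] == pred_labels[j]
                tp += same_true and same_pred
                fp += (not same_true) and same_pred
                fn += same_true and (not same_pred)
                tn += (not same_true) and (not same_pred)
            denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
            return ((tp * tn) - (fp * fn)) / denom if denom else 0.0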

  16. Image restoration for three-dimensional fluorescence microscopy using an orthonormal basis for efficient representation of depth-variant point-spread functions

    PubMed Central

    Patwary, Nurmohammed; Preza, Chrysanthe

    2015-01-01

    A depth-variant (DV) image restoration algorithm for wide field fluorescence microscopy, using an orthonormal basis decomposition of DV point-spread functions (PSFs), is investigated in this study. The efficient PSF representation is based on a previously developed principal component analysis (PCA), which is computationally intensive. We present an approach developed to reduce the number of DV PSFs required for the PCA computation, thereby making the PCA-based approach computationally tractable for thick samples. Restoration results from both synthetic and experimental images show consistency, and the proposed algorithm efficiently addresses depth-induced aberration using a small number of principal components. Comparison of the PCA-based algorithm with a previously developed strata-based DV restoration algorithm demonstrates that the proposed method improves performance by 50% in terms of accuracy and simultaneously reduces the processing time by 64% using comparable computational resources. PMID:26504634
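
    The orthonormal basis itself can be obtained by PCA of the depth-sampled PSF stack, for instance via a thin SVD. The sketch below shows only the basic decomposition, not the authors' scheme for reducing the number of PSFs entering the computation.

        import numpy as np

        def psf_basis(psf_stack, n_components):
            """Mean PSF, principal-component images and per-depth coefficients
            for a (n_depths, ny, nx) stack of depth-variant PSFs."""
            n, ny, nx = psf_stack.shape
            flat = psf_stack.reshape(n, ny * nx)
            mean = flat.mean(axis=0)
            u, s, vt = np.linalg.svd(flat - mean, full_matrices=False)
            basis = vt[:n_components]                # orthonormal basis images
            coeffs = (flat - mean) @ basis.T         # depth-varying weights
            return mean.reshape(ny, nx), basis.reshape(n_components, ny, nx), coeffs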

  17. Blur kernel estimation with algebraic tomography technique and intensity profiles of object boundaries

    NASA Astrophysics Data System (ADS)

    Ingacheva, Anastasia; Chukalina, Marina; Khanipov, Timur; Nikolaev, Dmitry

    2018-04-01

    Motion blur caused by camera vibration is a common source of degradation in photographs. In this paper we study the problem of finding the point spread function (PSF) of a blurred image using the tomography technique. The PSF reconstruction result strongly depends on the particular tomography technique used. We present a tomography algorithm with regularization adapted specifically for this task. We use the algebraic reconstruction technique (ART algorithm) as the starting algorithm and introduce regularization. We use the conjugate gradient method for the numerical implementation of the proposed approach. The algorithm is tested using a dataset which contains 9 kernels extracted from real photographs by the Adobe corporation, for which the point spread function is known. We also investigate the influence of noise on the quality of image reconstruction and how the number of projections influences the magnitude of the reconstruction error.
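
    A sketch of ART in its row-action (Kaczmarz) form with a generic shrinkage step standing in for the regularization; the paper's specific regularizer and its conjugate-gradient implementation are not reproduced here.

        import numpy as np

        def art_regularized(A, b, n_iters=50, relax=0.25, reg=1e-3):
            """Row-action ART for Ax = b with shrinkage and non-negativity,
            where A is the projection matrix and b the measured data."""
            x = np.zeros(A.shape[1])
            row_norms = (A ** 2).sum(axis=1)
            for _ in range(n_iters):
                for i in range(A.shape[0]):
                    if row_norms[i] > 0.0:
                        x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
                x /= 1.0 + reg                   # shrink toward zero
                np.clip(x, 0.0, None, out=x)     # a PSF is non-negative
            return x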

  18. TU-D-209-02: A Backscatter Point Spread Function for Entrance Skin Dose Determination

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vijayan, S; Xiong, Z; Shankar, A

    Purpose: To determine the distribution of backscattered radiation to the skin resulting from a non-uniform distribution of primary radiation through convolution with a backscatter point spread function (PSF). Methods: A backscatter PSF is determined using Monte Carlo simulation of a 1 mm primary beam incident on a 30 × 30 cm, 20 cm thick PMMA phantom using EGSnrc software. A primary profile is similarly obtained without the phantom, and the difference from the total provides the backscatter profile. This scatter PSF characterizes the backscatter spread for a “point” primary interaction and can be convolved with the entrance primary dose distribution to obtain the total entrance skin dose. The backscatter PSF was integrated into the skin dose tracking system (DTS), a graphical utility for displaying the color-coded skin dose distribution on a 3D graphic of the patient during interventional fluoroscopic procedures. The backscatter convolution method was validated for the non-uniform beam resulting from the use of an ROI attenuator. The ROI attenuator is a copper sheet with about 20% primary transmission (0.7 mm thick) containing a circular aperture; this attenuator is placed in the beam to reduce dose in the periphery while maintaining full dose in the region of interest. The DTS-calculated primary plus backscatter distribution is compared to that measured with GafChromic film and that calculated using EGSnrc Monte Carlo software. Results: The PSF convolution method used in the DTS software was able to account for the spread of backscatter from the ROI region to the region under the attenuator. The skin dose distribution determined using the DTS with the ROI attenuator was in good agreement with the distributions measured with GafChromic film and determined by Monte Carlo simulation. Conclusion: The PSF convolution technique provides an accurate alternative for entrance skin dose determination with non-uniform primary x-ray beams. Partial support from NIH Grant R01-EB002873 and Toshiba Medical Systems Corp.
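
    The core of the method is a single convolution. Assuming the PSF is normalised as scatter dose per unit primary dose, the total entrance dose follows as:

        import numpy as np
        from scipy.signal import fftconvolve

        def entrance_skin_dose(primary_dose, backscatter_psf):
            """Total entrance dose = primary + primary convolved with the
            backscatter point spread function."""
            scatter = fftconvolve(primary_dose, backscatter_psf, mode="same")
            return primary_dose + scatter

        # With an ROI attenuator the primary map is non-uniform, and the
        # convolution spreads backscatter from the aperture into the
        # attenuated periphery, as validated above.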

  19. Algorithms for the optimization of RBE-weighted dose in particle therapy.

    PubMed

    Horcicka, M; Meyer, C; Buschbacher, A; Durante, M; Krämer, M

    2013-01-21

    We report on various algorithms used for the nonlinear optimization of RBE-weighted dose in particle therapy. Concerning the dose calculation carbon ions are considered and biological effects are calculated by the Local Effect Model. Taking biological effects fully into account requires iterative methods to solve the optimization problem. We implemented several additional algorithms into GSI's treatment planning system TRiP98, like the BFGS-algorithm and the method of conjugated gradients, in order to investigate their computational performance. We modified textbook iteration procedures to improve the convergence speed. The performance of the algorithms is presented by convergence in terms of iterations and computation time. We found that the Fletcher-Reeves variant of the method of conjugated gradients is the algorithm with the best computational performance. With this algorithm we could speed up computation times by a factor of 4 compared to the method of steepest descent, which was used before. With our new methods it is possible to optimize complex treatment plans in a few minutes leading to good dose distributions. At the end we discuss future goals concerning dose optimization issues in particle therapy which might benefit from fast optimization solvers.
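
    The Fletcher-Reeves update singled out above is compact enough to sketch; a fixed step stands in for the line search a real TPS optimiser would use.

        import numpy as np

        def fletcher_reeves(grad, x0, step=1e-2, n_iters=200, tol=1e-8):
            """Nonlinear conjugate gradients with the Fletcher-Reeves beta."""
            x = x0.copy()
            g = grad(x)
            d = -g
            for _ in range(n_iters):
                x = x + step * d
                g_new = grad(x)
                if np.dot(g_new, g_new) < tol:
                    break
                beta = np.dot(g_new, g_new) / np.dot(g, g)   # FR update
                d = -g_new + beta * d
                g = g_new
            return x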

  1. The Creating an Optimal Warfarin Nomogram (CROWN) Study

    PubMed Central

    Perlstein, Todd S.; Goldhaber, Samuel Z.; Nelson, Kerrie; Joshi, Victoria; Morgan, T. Vance; Lesko, Lawrence J.; Lee, Joo-Yeon; Gobburu, Jogarao; Schoenfeld, David; Kucherlapati, Raju; Freeman, Mason W.; Creager, Mark A.

    2014-01-01

    A significant proportion of warfarin dose variability is explained by variation in the genotypes of the cytochrome P450 CYP2C9 and the vitamin K epoxide reductase complex, VKORC1, enzymes that influence warfarin metabolism and sensitivity, respectively. We sought to develop an optimal pharmacogenetic warfarin dosing algorithm that incorporated clinical and genetic information. We enrolled patients initiating warfarin therapy. Genotyping was performed for the VKORC1 –1639G>A, CYP2C9*2 430C>T, and CYP2C9*3 1075C>A variants. The initial warfarin dosing algorithm (Algorithm A) was based upon established clinical practice and published warfarin pharmacogenetic information. Subsequent dosing algorithms (Algorithms B and C) were derived from pharmacokinetic/pharmacodynamic (PK/PD) modelling of warfarin dose, international normalised ratio (INR), and clinical and genetic factors from patients treated by the preceding algorithm(s). The primary outcome was the time in the therapeutic range, considered an INR of 1.8 to 3.2. A total of 344 subjects are included in the study analyses. The mean percentage time within the therapeutic range for each subject increased progressively from Algorithm A to Algorithm C, from 58.9 (22.0) to 59.7 (23.0) to 65.8 (16.9) percent (p = 0.04). Improvement also occurred in most secondary endpoints, which included the per-patient percentage of INRs outside of the therapeutic range (p = 0.004), the time to the first therapeutic INR (p = 0.07), and the time to achieve stable therapeutic anticoagulation (p < 0.001). In conclusion, warfarin pharmacogenetic dosing can be optimised in real time utilising observed PK/PD information in an adaptive fashion. Clinical Trial Registration: ClinicalTrials.gov (NCT00401414) PMID:22116191
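
    Time in therapeutic range is conventionally computed by Rosendaal linear interpolation between successive INR measurements; the sketch below assumes that convention, which the abstract does not spell out.

        import numpy as np

        def time_in_range(days, inrs, low=1.8, high=3.2):
            """Percent time with interpolated INR inside [low, high]."""
            in_range = total = 0.0
            for (t0, i0), (t1, i1) in zip(zip(days, inrs), zip(days[1:], inrs[1:])):
                span = t1 - t0
                total += span
                if i1 == i0:
                    in_range += span if low <= i0 <= high else 0.0
                    continue
                # fraction of the linear INR path between t0 and t1 in range
                s_low = np.clip((low - i0) / (i1 - i0), 0.0, 1.0)
                s_high = np.clip((high - i0) / (i1 - i0), 0.0, 1.0)
                in_range += span * abs(s_high - s_low)
            return 100.0 * in_range / total if total else 0.0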

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Serin, E.; Codel, G.; Mabhouti, H.

    Purpose: In small field geometries, the electronic equilibrium can be lost, making it challenging for the dose-calculation algorithm to accurately predict the dose, especially in the presence of tissue heterogeneities. In this study, the dosimetric accuracy of the Monte Carlo (MC) advanced dose calculation and sequential algorithms of the Multiplan treatment planning system was investigated for small radiation fields incident on homogeneous and heterogeneous geometries. Methods: Small open fields of fixed cones of a Cyberknife M6 unit, 100 to 500 mm2, were used for this study. The fields were incident on an in-house phantom containing lung, air, and bone inhomogeneities, and also on a homogeneous phantom. Using the same film batch, the net OD to dose calibration curve was obtained using the CK with the 60 mm fixed cone by delivering 0-800 cGy. Films were scanned 48 hours after irradiation using an Epson 1000XL flatbed scanner. The dosimetric accuracy of the MC and sequential algorithms in the presence of the inhomogeneities was compared against EBT3 film dosimetry. Results: Open field tests in a homogeneous phantom showed good agreement between the two algorithms and film measurement. For the MC algorithm, the minimum gamma analysis passing rates between measured and calculated dose distributions were 99.7% and 98.3% for homogeneous and inhomogeneous fields in the case of lung and bone, respectively. For the sequential algorithm, the minimum gamma analysis passing rates were 98.9% and 92.5% for homogeneous and inhomogeneous fields, respectively, for all cone sizes used. In the case of the air heterogeneity, the differences were larger for both calculation algorithms. Overall, when compared to measurement, the MC showed better agreement than the sequential algorithm. Conclusion: The Monte Carlo calculation algorithm in the Multiplan treatment planning system is an improvement over the existing sequential algorithm. Dose discrepancies were observed in the presence of air inhomogeneities.

  3. Performance of dose calculation algorithms from three generations in lung SBRT: comparison with full Monte Carlo‐based dose distributions

    PubMed Central

    Kapanen, Mika K.; Hyödynmaa, Simo J.; Wigren, Tuija K.; Pitkänen, Maunu A.

    2014-01-01

    The accuracy of dose calculation is a key challenge in stereotactic body radiotherapy (SBRT) of the lung. We have benchmarked three photon beam dose calculation algorithms — pencil beam convolution (PBC), anisotropic analytical algorithm (AAA), and Acuros XB (AXB) — implemented in a commercial treatment planning system (TPS), Varian Eclipse. Dose distributions from full Monte Carlo (MC) simulations were regarded as a reference. In the first stage, for four patients with central lung tumors, treatment plans using 3D conformal radiotherapy (CRT) technique applying 6 MV photon beams were made using the AXB algorithm, with planning criteria according to the Nordic SBRT study group. The plans were recalculated (with same number of monitor units (MUs) and identical field settings) using BEAMnrc and DOSXYZnrc MC codes. The MC‐calculated dose distributions were compared to corresponding AXB‐calculated dose distributions to assess the accuracy of the AXB algorithm, to which then other TPS algorithms were compared. In the second stage, treatment plans were made for ten patients with 3D CRT technique using both the PBC algorithm and the AAA. The plans were recalculated (with same number of MUs and identical field settings) with the AXB algorithm, then compared to original plans. Throughout the study, the comparisons were made as a function of the size of the planning target volume (PTV), using various dose‐volume histogram (DVH) and other parameters to quantitatively assess the plan quality. In the first stage also, 3D gamma analyses with threshold criteria 3%/3 mm and 2%/2 mm were applied. The AXB‐calculated dose distributions showed relatively high level of agreement in the light of 3D gamma analysis and DVH comparison against the full MC simulation, especially with large PTVs, but, with smaller PTVs, larger discrepancies were found. Gamma agreement index (GAI) values between 95.5% and 99.6% for all the plans with the threshold criteria 3%/3 mm were achieved, but 2%/2 mm threshold criteria showed larger discrepancies. The TPS algorithm comparison results showed large dose discrepancies in the PTV mean dose (D50%), nearly 60%, for the PBC algorithm, and differences of nearly 20% for the AAA, occurring also in the small PTV size range. This work suggests the application of independent plan verification, when the AAA or the AXB algorithm are utilized in lung SBRT having PTVs smaller than 20‐25 cc. The calculated data from this study can be used in converting the SBRT protocols based on type ‘a’ and/or type ‘b’ algorithms for the most recent generation type ‘c’ algorithms, such as the AXB algorithm. PACS numbers: 87.55.‐x, 87.55.D‐, 87.55.K‐, 87.55.kd, 87.55.Qr PMID:24710454

  4. SU-E-T-371: Evaluating the Convolution Algorithm of a Commercially Available Radiosurgery Irradiator Using a Novel Phantom

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cates, J; Drzymala, R

    2015-06-15

    Purpose: The purpose of this study was to develop and use a novel phantom to evaluate the accuracy and usefulness of the Leksell GammaPlan convolution-based dose calculation algorithm compared with the current TMR10 algorithm. Methods: A novel phantom was designed to fit the Leksell Gamma Knife G Frame which could accommodate various materials in the form of one inch diameter, cylindrical plugs. The plugs were split axially to allow EBT2 film placement. Film measurements were made during two experiments. The first utilized plans generated on a homogeneous acrylic phantom setup using the TMR10 algorithm, with various materials inserted into the phantom during film irradiation to assess the effect on delivered dose due to unplanned heterogeneities upstream in the beam path. The second experiment utilized plans made on CT scans of different heterogeneous setups, with one plan using the TMR10 dose calculation algorithm and the second using the convolution-based algorithm. Materials used to introduce heterogeneities included air, LDPE, polystyrene, Delrin, Teflon, and aluminum. Results: The data show that, as would be expected, having heterogeneities in the beam path does induce dose delivery error when using the TMR10 algorithm, with the largest errors being due to the heterogeneities with electron densities most different from that of water, i.e., air, Teflon, and aluminum. Additionally, the convolution algorithm did account for the heterogeneous material and provided a more accurate predicted dose, in extreme cases up to a 7–12% improvement over the TMR10 algorithm. The convolution algorithm expected dose was accurate to within 3% in all cases. Conclusion: This study proves that the convolution algorithm is an improvement over the TMR10 algorithm when heterogeneities are present. More work is needed to determine the heterogeneity size/volume limits within which this improvement exists, and in what clinical and/or research cases this would be relevant.

  5. A Novel Admixture-Based Pharmacogenetic Approach to Refine Warfarin Dosing in Caribbean Hispanics.

    PubMed

    Duconge, Jorge; Ramos, Alga S; Claudio-Campos, Karla; Rivera-Miranda, Giselle; Bermúdez-Bosch, Luis; Renta, Jessicca Y; Cadilla, Carmen L; Cruz, Iadelisse; Feliu, Juan F; Vergara, Cunegundo; Ruaño, Gualberto

    2016-01-01

    This study is aimed at developing a novel admixture-adjusted pharmacogenomic approach to individually refine warfarin dosing in Caribbean Hispanic patients. A multiple linear regression analysis of effective warfarin doses versus relevant genotypes, admixture, clinical and demographic factors was performed in 255 patients and further validated externally in another cohort of 55 individuals. The admixture-adjusted, genotype-guided warfarin dosing refinement algorithm developed in Caribbean Hispanics showed better predictability (R2 = 0.70, MAE = 0.72 mg/day) than a clinical algorithm that excluded genotypes and admixture (R2 = 0.60, MAE = 0.99 mg/day), and outperformed two prior pharmacogenetic algorithms in predicting effective dose in this population. For patients at the highest risk of adverse events, 45.5% of the dose predictions using the developed pharmacogenetic model resulted in ideal dose as compared with only 29% when using the clinical non-genetic algorithm (p < 0.001). The admixture-driven pharmacogenetic algorithm predicted 58% of warfarin dose variance when externally validated in 55 individuals from an independent validation cohort (MAE = 0.89 mg/day, 24% mean bias). Results supported our rationale to incorporate individuals' genotypes and unique admixture metrics into pharmacogenetic refinement models in order to increase predictability when expanding them to admixed populations like Caribbean Hispanics. ClinicalTrials.gov NCT01318057.

  6. A comparison of two dose calculation algorithms-anisotropic analytical algorithm and Acuros XB-for radiation therapy planning of canine intranasal tumors.

    PubMed

    Nagata, Koichi; Pethel, Timothy D

    2017-07-01

    Although anisotropic analytical algorithm (AAA) and Acuros XB (AXB) are both radiation dose calculation algorithms that take into account the heterogeneity within the radiation field, Acuros XB is inherently more accurate. The purpose of this retrospective method comparison study was to compare them and evaluate the dose discrepancy within the planning target volume (PTV). Radiation therapy (RT) plans of 11 dogs with intranasal tumors treated by radiation therapy at the University of Georgia were evaluated. All dogs were planned for intensity-modulated radiation therapy using nine equally spaced coplanar X-ray beams, with dose calculated using the anisotropic analytical algorithm. The same plan with the same monitor units was then recalculated using Acuros XB for comparison. Each dog's planning target volume was separated into air, bone, and tissue and evaluated. The mean dose to the planning target volume estimated by Acuros XB was 1.3% lower overall: 1.4% higher for air, 3.7% lower for bone, and 0.9% lower for tissue. The volume of the planning target volume covered by the prescribed dose decreased by 21% when Acuros XB was used, due to increased dose heterogeneity within the planning target volume. The anisotropic analytical algorithm relatively underestimates the dose heterogeneity and relatively overestimates the dose to the bone and tissue within the planning target volume for radiation therapy planning of canine intranasal tumors. This can be clinically significant, especially if tumor cells are present within the bone, because it may result in relative underdosing of the tumor. © 2017 American College of Veterinary Radiology.

  7. Evaluation of six TPS algorithms in computing entrance and exit doses

    PubMed Central

    Metwaly, Mohamed; Glegg, Martin; Baggarley, Shaun P.; Elliott, Alex

    2014-01-01

    Entrance and exit doses are commonly measured in in vivo dosimetry for comparison with expected values, usually generated by the treatment planning system (TPS), to verify accuracy of treatment delivery. This report aims to evaluate the accuracy of six TPS algorithms in computing entrance and exit doses for a 6 MV beam. The algorithms tested were: pencil beam convolution (Eclipse PBC), analytical anisotropic algorithm (Eclipse AAA), AcurosXB (Eclipse AXB), FFT convolution (XiO Convolution), multigrid superposition (XiO Superposition), and Monte Carlo photon (Monaco MC). Measurements with ionization chamber (IC) and diode detector in water phantoms were used as a reference. Comparisons were done in terms of central axis point dose, 1D relative profiles, and 2D absolute gamma analysis. Entrance doses computed by all TPS algorithms agreed to within 2% of the measured values. Exit doses computed by XiO Convolution, XiO Superposition, Eclipse AXB, and Monaco MC agreed with the IC measured doses to within 2%‐3%. Meanwhile, Eclipse PBC and Eclipse AAA computed exit doses were higher than the IC measured doses by up to 5.3% and 4.8%, respectively. Both algorithms assume that full backscatter exists even at the exit level, leading to an overestimation of exit doses. Despite good agreements at the central axis for Eclipse AXB and Monaco MC, 1D relative comparisons showed profiles mismatched at depths beyond 11.5 cm. Overall, the 2D absolute gamma (3%/3 mm) pass rates were better for Monaco MC, while Eclipse AXB failed mostly at the outer 20% of the field area. The findings of this study serve as a useful baseline for the implementation of entrance and exit in vivo dosimetry in clinical departments utilizing any of these six common TPS algorithms for reference comparison. PACS numbers: 87.55.‐x, 87.55.D‐, 87.55.N‐, 87.53.Bn PMID:24892349
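    The gamma comparison cited above (3%/3 mm) can be illustrated in one dimension. The sketch below is a minimal global-normalization gamma index on toy depth-dose curves; clinical tools work on 2D/3D dose grids with interpolation and finer search:

      import numpy as np

      def gamma_1d(x_ref, d_ref, x_eval, d_eval, dta_mm=3.0, dd_pct=3.0):
          """Return the gamma value at each reference point (global normalization)."""
          dd = dd_pct / 100.0 * d_ref.max()        # global dose-difference criterion
          gammas = []
          for xr, dr in zip(x_ref, d_ref):
              g = np.sqrt(((x_eval - xr) / dta_mm) ** 2 + ((d_eval - dr) / dd) ** 2)
              gammas.append(g.min())               # minimum over all evaluated points
          return np.array(gammas)

      x = np.linspace(0, 100, 201)                 # depth (mm)
      measured = 100 * np.exp(-0.03 * x)           # toy measured depth dose
      computed = 100 * np.exp(-0.031 * x)          # toy TPS depth dose
      g = gamma_1d(x, measured, x, computed)
      print(f"gamma pass rate: {np.mean(g <= 1) * 100:.1f}%")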

  8. Agreement between gamma passing rates using computed tomography in radiotherapy and secondary cancer risk prediction from more advanced dose calculated models

    PubMed Central

    Balosso, Jacques

    2017-01-01

    Background During the past decades in radiotherapy, dose distributions were calculated using density correction methods with pencil beam as a type 'a' algorithm. The objective of this study was to assess and evaluate the impact of the dose distribution shift on the predicted secondary cancer risk (SCR) when using modern advanced dose calculation algorithms (point kernel, type 'b'), which consider changes in lateral electron transport. Methods Clinical examples of pediatric cranio-spinal irradiation patients were evaluated. For each case, two radiotherapy treatment plans were generated using the same prescribed dose to the target, resulting in different numbers of monitor units (MUs) per field. The dose distributions were calculated using both algorithm types. A gamma index (γ) analysis was used to compare dose distributions in the lung. The organ equivalent dose (OED) was calculated with three different models: the linear, the linear-exponential and the plateau dose-response curves. The excess absolute risk ratio (EAR) was also evaluated as EAR = OED type 'b' / OED type 'a'. Results The γ analysis results indicated an acceptable dose distribution agreement of 95% with 3%/3 mm. However, the γ-maps displayed dose displacements >1 mm around the healthy lungs. Compared to type 'a', the OED values from type 'b' dose distributions were about 8% to 16% higher, leading to an EAR ratio >1, ranging from 1.08 to 1.13 depending on the SCR model. Conclusions The shift of dose calculation in radiotherapy, according to the algorithm, can significantly influence the SCR prediction and the plan optimization, since OEDs are calculated from the DVH for a specific treatment. The agreement between dose distribution and SCR prediction depends on the dose-response models and epidemiological data. In addition, a γ passing rate of 3%/3 mm does not reflect the difference, up to 15%, in the predictions of SCR resulting from alternative algorithms. Considering that modern algorithms are more accurate, showing the dose distributions more precisely, but that the prediction of absolute SCR is still very imprecise, only the EAR ratio could be used to rank radiotherapy plans. PMID:28811995
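    A minimal sketch of the OED and EAR-ratio computation from a differential DVH, using Schneider-style linear, linear-exponential and plateau dose-response models; the organ parameters (alpha, delta) and the DVHs below are illustrative placeholders, not values from the paper:

      import numpy as np

      def oed(dose_gy, vol_frac, model="linear", alpha=0.07, delta=0.07):
          """Organ equivalent dose from a differential DVH (bin doses, volume fractions)."""
          if model == "linear":
              resp = dose_gy
          elif model == "linear-exp":
              resp = dose_gy * np.exp(-alpha * dose_gy)
          elif model == "plateau":
              resp = (1 - np.exp(-delta * dose_gy)) / delta
          return np.sum(vol_frac * resp)

      dose = np.linspace(0.5, 40, 80)                   # DVH bin centers (Gy)
      dvh_a = np.exp(-dose / 8); dvh_a /= dvh_a.sum()   # toy type 'a' lung DVH
      dvh_b = np.exp(-dose / 9); dvh_b /= dvh_b.sum()   # toy type 'b' lung DVH

      for m in ("linear", "linear-exp", "plateau"):
          print(f"{m:11s} EAR ratio = {oed(dose, dvh_b, m) / oed(dose, dvh_a, m):.2f}")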

  9. The energy-dependent electron loss model: backscattering and application to heterogeneous slab media.

    PubMed

    Lee, Tae Kyu; Sandison, George A

    2003-01-21

    Electron backscattering has been incorporated into the energy-dependent electron loss (EL) model and the resulting algorithm is applied to predict dose deposition in slab heterogeneous media. This algorithm utilizes a reflection coefficient from the interface that is computed on the basis of Goudsmit-Saunderson theory and an average energy for the backscattered electrons based on Everhart's theory. Predictions of dose deposition in slab heterogeneous media are compared to the Monte Carlo based dose planning method (DPM) and a numerical discrete ordinates method (DOM). The slab media studied comprised water/Pb, water/Al, water/bone, water/bone/water, and water/lung/water, and incident electron beam energies of 10 MeV and 18 MeV. The predicted dose enhancement due to backscattering is accurate to within 3% of dose maximum even for lead as the backscattering medium. Dose discrepancies at large depths beyond the interface were as high as 5% of dose maximum and we speculate that this error may be attributed to the EL model assuming a Gaussian energy distribution for the electrons at depth. The computational cost is low compared to Monte Carlo simulations making the EL model attractive as a fast dose engine for dose optimization algorithms. The predictive power of the algorithm demonstrates that the small angle scattering restriction on the EL model can be overcome while retaining dose calculation accuracy and requiring only one free variable, chi, in the algorithm to be determined in advance of calculation.

  10. SU-F-T-444: Quality Improvement Review of Radiation Therapy Treatment Planning in the Presence of Dental Implants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parenica, H; Ford, J; Mavroidis, P

    Purpose: To quantify and compare the effect of metallic dental implants (MDI) on dose distributions calculated using the Collapsed Cone Convolution Superposition (CCCS) algorithm or a Monte Carlo algorithm (with and without correcting for the density of the MDI). Methods: Seven patients previously treated in the head and neck region were included in this study. The MDI and the streaking artifacts on the CT images were carefully contoured. For each patient a plan was optimized and calculated using the Pinnacle3 treatment planning system (TPS). For each patient two dose calculations were performed: a) with the densities of the MDI and CT artifacts overridden (12 g/cc and 1 g/cc, respectively) and b) without density overrides. The plans were then exported to the Monaco TPS and recalculated using the Monte Carlo dose calculation algorithm. The changes in dose to PTVs and surrounding Regions of Interest (ROIs) were examined between all plans. Results: The Monte Carlo dose calculation indicated that PTVs received 6% lower dose than the CCCS algorithm predicted. In some cases, the Monte Carlo algorithm indicated that surrounding ROIs received higher dose (up to a factor of 2). Conclusion: Not properly accounting for dental implants can impact both the high dose regions (PTV) and the low dose regions (OAR). This study implies that if MDI and the artifacts are not appropriately contoured and given the correct density, there is a potentially significant impact on PTV coverage and OAR maximum doses.

  11. The energy-dependent electron loss model: backscattering and application to heterogeneous slab media

    NASA Astrophysics Data System (ADS)

    Lee, Tae Kyu; Sandison, George A.

    2003-01-01

    Electron backscattering has been incorporated into the energy-dependent electron loss (EL) model and the resulting algorithm is applied to predict dose deposition in slab heterogeneous media. This algorithm utilizes a reflection coefficient from the interface that is computed on the basis of Goudsmit-Saunderson theory and an average energy for the backscattered electrons based on Everhart's theory. Predictions of dose deposition in slab heterogeneous media are compared to the Monte Carlo based dose planning method (DPM) and a numerical discrete ordinates method (DOM). The slab media studied comprised water/Pb, water/Al, water/bone, water/bone/water, and water/lung/water, and incident electron beam energies of 10 MeV and 18 MeV. The predicted dose enhancement due to backscattering is accurate to within 3% of dose maximum even for lead as the backscattering medium. Dose discrepancies at large depths beyond the interface were as high as 5% of dose maximum and we speculate that this error may be attributed to the EL model assuming a Gaussian energy distribution for the electrons at depth. The computational cost is low compared to Monte Carlo simulations making the EL model attractive as a fast dose engine for dose optimization algorithms. The predictive power of the algorithm demonstrates that the small angle scattering restriction on the EL model can be overcome while retaining dose calculation accuracy and requiring only one free variable, χ, in the algorithm to be determined in advance of calculation.

  12. Dose specification for radiation therapy: dose to water or dose to medium?

    NASA Astrophysics Data System (ADS)

    Ma, C.-M.; Li, Jinsheng

    2011-05-01

    The Monte Carlo method enables accurate dose calculation for radiation therapy treatment planning and has been implemented in some commercial treatment planning systems. Unlike conventional dose calculation algorithms that provide patient dose information in terms of dose to water with variable electron density, the Monte Carlo method calculates the energy deposition in different media and expresses dose to a medium. This paper discusses the differences between dose calculated in water of varied electron density and dose calculated in different biological media, as well as the clinical issues in dose specification, including dose prescription and plan evaluation, using dose to water versus dose to medium. We demonstrate that conventional photon dose calculation algorithms compute doses similar to those simulated by Monte Carlo using water with different electron densities, which are close (<4% differences) to doses to media but significantly different (up to 11%) from doses to water converted from doses to media following the American Association of Physicists in Medicine (AAPM) Task Group 105 recommendations. Our results suggest that, for consistency with previous radiation therapy experience, Monte Carlo photon algorithms should report dose to medium for radiotherapy dose prescription, treatment plan evaluation and treatment outcome analysis.
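    A minimal sketch of the TG-105 style conversion discussed above, multiplying dose-to-medium by a water-to-medium mass stopping-power ratio; the ratios below are rough assumed values for illustration, not TG-105 tabulated data:

      # Assumed water-to-medium stopping-power ratios (illustrative only).
      STOPPING_POWER_RATIO_W_M = {
          "soft_tissue": 1.00,
          "lung": 1.00,
          "cortical_bone": 1.11,
      }

      def dose_to_water(dose_to_medium_gy: float, medium: str) -> float:
          """Convert dose-to-medium to dose-to-water via the stopping-power ratio."""
          return dose_to_medium_gy * STOPPING_POWER_RATIO_W_M[medium]

      # 2 Gy scored in cortical bone converts to about 2.22 Gy-to-water here.
      print(dose_to_water(2.0, "cortical_bone"))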

  13. Independent dose verification system with Monte Carlo simulations using TOPAS for passive scattering proton therapy at the National Cancer Center in Korea

    NASA Astrophysics Data System (ADS)

    Shin, Wook-Geun; Testa, Mauro; Kim, Hak Soo; Jeong, Jong Hwi; Byeong Lee, Se; Kim, Yeon-Joo; Min, Chul Hee

    2017-10-01

    For the independent validation of treatment plans, we developed a fully automated Monte Carlo (MC)-based patient dose calculation system with the tool for particle simulation (TOPAS) and the proton therapy machine installed at the National Cancer Center in Korea, to enable routine and automatic dose recalculation for each patient. The proton beam nozzle was modeled with TOPAS to simulate the therapeutic beam, and MC commissioning was performed by comparing percent depth dose with measurement. The beam set-up based on the prescribed beam range and modulation width was automated by modifying the vendor-specific method. The CT phantom was modeled from the DICOM CT files with the TOPAS built-in function, and an in-house-developed C++ code directly imports the CT files for positioning the CT phantom, the RT-plan file for simulating the treatment plan, and the RT-structure file for applying the Hounsfield unit (HU) assignment, respectively. The developed system was validated by comparing the dose distributions with those calculated by the treatment planning system (TPS) for a lung phantom and two patient cases of abdomen and internal mammary node. The results of the beam commissioning were in good agreement, to within 0.8 mm2 g-1 for the B8 option, in both the beam range and the modulation width of the spread-out Bragg peaks. The beam set-up technique can predict the range and modulation width with an accuracy of 0.06% and 0.51%, respectively, with respect to the prescribed range and modulation at arbitrary points of the B5 option (128.3, 132.0, and 141.2 mm2 g-1 of range). The dose distributions showed higher than a 99% passing rate for the 3D gamma index (3 mm distance to agreement and 3% dose difference) between the MC simulations and the clinical TPS in the target volume. However, in the normal tissues, less favorable agreement was obtained for the radiation treatment planning with the lung phantom and internal mammary node cases. The discrepancies might come from the limitations of the clinical TPS, namely an inaccurate dose calculation algorithm for the scattering effect in the range compensator and inhomogeneous material. Moreover, the steep slope of the compensator, the conversion of the HU values to the human phantom, and the dose calculation algorithm for the HU assignment could also be reasons for the discrepancies. The current study could be used for the independent dose validation of treatment plans involving strong inhomogeneities, steep compensators, and high-risk sites such as lung and head & neck cases. According to the treatment policy, the dose discrepancies predicted with MC could be used for the acceptance decision of the original treatment plan.

  14. Predicting the evolution of spreading on complex networks

    PubMed Central

    Chen, Duan-Bing; Xiao, Rui; Zeng, An

    2014-01-01

    Due to their wide applications, spreading processes on complex networks have been intensively studied. However, one of the most fundamental problems has not yet been well addressed: predicting the evolution of spreading based on a given snapshot of the propagation on networks. With this problem solved, one can accelerate or slow down the spreading in advance if the predicted propagation result is narrower or wider than expected. In this paper, we propose an iterative algorithm to estimate the infection probability of the spreading process and then apply it to a mean-field approach to predict the spreading coverage. The validation of the method is performed in both artificial and real networks. The results show that our method is accurate in both infection probability estimation and spreading coverage prediction. PMID:25130862
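    In the same spirit, the sketch below estimates a single infection probability from a snapshot and then iterates a mean-field susceptible-infected update over the adjacency matrix to predict coverage; the estimator is deliberately crude and this is not the paper's iterative algorithm:

      import numpy as np

      rng = np.random.default_rng(1)
      n = 200
      A = (rng.random((n, n)) < 0.03).astype(float)
      A = np.triu(A, 1)
      A = A + A.T                                  # undirected random graph

      x = np.zeros(n)
      x[rng.choice(n, 10, replace=False)] = 1.0    # infection snapshot

      # Crude estimate of the per-link infection probability from the snapshot:
      # infected fraction among nodes adjacent to at least one infected node.
      nbr = (A @ x) > 0
      beta = x[nbr].mean() if nbr.any() else 0.1

      for _ in range(20):                          # mean-field SI iteration
          p_escape = np.prod(1.0 - beta * A * x[None, :], axis=1)
          x = x + (1.0 - x) * (1.0 - p_escape)

      print(f"predicted spreading coverage after 20 steps: {x.mean():.2f}")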

  15. Evaluation of the Eclipse eMC algorithm for bolus electron conformal therapy using a standard verification dataset.

    PubMed

    Carver, Robert L; Sprunger, Conrad P; Hogstrom, Kenneth R; Popple, Richard A; Antolak, John A

    2016-05-08

    The purpose of this study was to evaluate the accuracy and calculation speed of electron dose distributions calculated by the Eclipse electron Monte Carlo (eMC) algorithm for use with bolus electron conformal therapy (ECT). The recent commercial availability of bolus ECT technology requires further validation of the eMC dose calculation algorithm. eMC-calculated electron dose distributions for bolus ECT have been compared to previously measured TLD-dose points throughout patient-based cylindrical phantoms (retromolar trigone and nose), whose axial cross sections were based on the mid-PTV (planning target volume) CT anatomy. The phantoms consisted of SR4 muscle substitute, SR4 bone substitute, and air. The treatment plans were imported into the Eclipse treatment planning system, and electron dose distributions calculated using 1% and < 0.2% statistical uncertainties. The accuracy of the dose calculations using moderate smoothing and no smoothing was evaluated. Dose differences (eMC-calculated less measured dose) were evaluated in terms of absolute dose difference, where 100% equals the given dose, as well as distance to agreement (DTA). Dose calculations were also evaluated for calculation speed. Results from the eMC for the retromolar trigone phantom using 1% statistical uncertainty without smoothing showed calculated dose at 89% (41/46) of the measured TLD-dose points was within 3% dose difference or 3 mm DTA of the measured value. The average dose difference was -0.21%, and the net standard deviation was 2.32%. Differences as large as 3.7% occurred immediately distal to the mandible bone. Results for the nose phantom, using 1% statistical uncertainty without smoothing, showed calculated dose at 93% (53/57) of the measured TLD-dose points within 3% dose difference or 3 mm DTA. The average dose difference was 1.08%, and the net standard deviation was 3.17%. Differences as large as 10% occurred lateral to the nasal air cavities. Including smoothing had insignificant effects on the accuracy of the retromolar trigone phantom calculations, but reduced the accuracy of the nose phantom calculations in the high-gradient dose areas. Dose calculation times with 1% statistical uncertainty for the retromolar trigone and nose treatment plans were 30 s and 24 s, respectively, using 16 processors (Intel Xeon E5-2690, 2.9 GHz) on a framework agent server (FAS). In comparison, the eMC was significantly more accurate than the pencil beam algorithm (PBA). The eMC has comparable accuracy to the pencil beam redefinition algorithm (PBRA) used for bolus ECT planning and has acceptably low dose calculation times. The eMC accuracy decreased when smoothing was used in high-gradient dose regions. The eMC accuracy was consistent with that previously reported for the eMC electron dose algorithm and shows that the algorithm is suitable for clinical implementation of bolus ECT.

  16. An in vivo evaluation of the antiseizure activity and acute neurotoxicity of agmatine.

    PubMed

    Bence, Aimee K; Worthen, David R; Stables, James P; Crooks, Peter A

    2003-02-01

    Agmatine, an endogenous cationic amine, exerts a wide range of biological effects, including modulation of glutamate-activated N-methyl-D-aspartate (NMDA) receptor function in the central nervous system (CNS). Since glutamate and the NMDA receptor have been implicated in the initiation and spread of seizure activity, the capacity of agmatine to inhibit seizure spread was evaluated in vivo. Orally administered agmatine (30 mg/kg) protected against maximal electroshock seizure (MES)-induced seizure spread in rats as rapidly as 15 min and for as long as 6 h after administration. Inhibition of MES-induced seizure spread was also observed when agmatine was administered intraperitoneally. Agmatine's antiseizure activity did not appear to be dose-dependent. An in vivo neurotoxicity screen indicated that agmatine was devoid of any acute neurological toxicity at the doses tested. These preliminary data suggest that agmatine has promising anticonvulsant activity.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thrower, Sara L., E-mail: slloupot@mdanderson.org; Shaitelman, Simona F.; Bloom, Elizabeth

    Purpose: To compare the treatment plans for accelerated partial breast irradiation calculated by the new commercially available collapsed cone convolution (CCC) and current standard TG-43-based algorithms for 50 patients treated at our institution with either a Strut-Adjusted Volume Implant (SAVI) or Contura device. Methods and Materials: We recalculated target coverage, volume of highly dosed normal tissue, and dose to organs at risk (ribs, skin, and lung) with each algorithm. For 1 case an artificial air pocket was added to simulate 10% nonconformance. We performed a Wilcoxon signed rank test to determine the median differences in the clinical indices V90, V95, V100, V150, V200, and highest-dosed 0.1 cm³ and 1.0 cm³ of rib, skin, and lung between the two algorithms. Results: The CCC algorithm calculated lower values on average for all dose-volume histogram parameters. Across the entire patient cohort, the median difference in the clinical indices calculated by the 2 algorithms was <10% for dose to organs at risk, <5% for target volume coverage (V90, V95, and V100), and <4 cm³ for dose to normal breast tissue (V150 and V200). No discernible difference was seen in the nonconformance case. Conclusions: We found that, on average over our patient population, CCC calculated lower doses (<10%) than TG-43. These results should inform clinicians as they prepare for the transition to heterogeneous dose calculation algorithms and determine whether clinical tolerance limits warrant modification.
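    A minimal sketch of the paired statistical comparison used above: a Wilcoxon signed-rank test on per-patient DVH indices from the two algorithms, with made-up V100 values standing in for real plan data:

      import numpy as np
      from scipy.stats import wilcoxon

      rng = np.random.default_rng(2)
      v100_tg43 = rng.normal(95.0, 2.0, 50)             # % target receiving 100% dose
      v100_ccc = v100_tg43 - rng.normal(2.0, 1.0, 50)   # CCC reads systematically lower

      stat, p = wilcoxon(v100_tg43, v100_ccc)           # paired, non-parametric test
      print(f"median difference = {np.median(v100_tg43 - v100_ccc):.2f}%, p = {p:.3g}")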

  18. A Novel Admixture-Based Pharmacogenetic Approach to Refine Warfarin Dosing in Caribbean Hispanics

    PubMed Central

    Claudio-Campos, Karla; Rivera-Miranda, Giselle; Bermúdez-Bosch, Luis; Renta, Jessicca Y.; Cadilla, Carmen L.; Cruz, Iadelisse; Feliu, Juan F.; Vergara, Cunegundo; Ruaño, Gualberto

    2016-01-01

    Aim This study is aimed at developing a novel admixture-adjusted pharmacogenomic approach to individually refine warfarin dosing in Caribbean Hispanic patients. Patients & Methods A multiple linear regression analysis of effective warfarin doses versus relevant genotypes, admixture, clinical and demographic factors was performed in 255 patients and further validated externally in another cohort of 55 individuals. Results The admixture-adjusted, genotype-guided warfarin dosing refinement algorithm developed in Caribbean Hispanics showed better predictability (R² = 0.70, MAE = 0.72 mg/day) than a clinical algorithm that excluded genotypes and admixture (R² = 0.60, MAE = 0.99 mg/day), and outperformed two prior pharmacogenetic algorithms in predicting effective dose in this population. For patients at the highest risk of adverse events, 45.5% of the dose predictions using the developed pharmacogenetic model resulted in an ideal dose, as compared with only 29% when using the clinical non-genetic algorithm (p<0.001). The admixture-driven pharmacogenetic algorithm predicted 58% of warfarin dose variance when externally validated in 55 individuals from an independent validation cohort (MAE = 0.89 mg/day, 24% mean bias). Conclusions The results supported our rationale to incorporate individuals' genotypes and unique admixture metrics into pharmacogenetic refinement models in order to increase predictability when expanding them to admixed populations like Caribbean Hispanics. Trial Registration ClinicalTrials.gov NCT01318057 PMID:26745506

  19. WE-AB-207B-05: Correlation of Normal Lung Density Changes with Dose After Stereotactic Body Radiotherapy (SBRT) for Early Stage Lung Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Q; Devpura, S; Feghali, K

    2016-06-15

    Purpose: To investigate the correlation of normal lung CT density changes with dose accuracy and outcome after SBRT for patients with early stage lung cancer. Methods: Dose distributions for patients originally planned and treated using a 1-D pencil beam-based (PB-1D) dose algorithm were retrospectively recomputed using four algorithms: 3-D pencil beam (PB-3D) and the model-based methods AAA, Acuros XB (AXB), and Monte Carlo (MC). The prescription dose was 12 Gy × 4 fractions. Planning CT images were rigidly registered to the follow-up CT datasets at 6–9 months after treatment. Corresponding dose distributions were mapped from the planning to follow-up CT images. Following the method of Palma et al. (1–2), Hounsfield Unit (HU) changes in lung density in individual 5 Gy dose bins from 5–45 Gy were assessed in the peri-tumor region, defined as a uniform 3 cm expansion around the ITV (1). Results: There is a 10–15% displacement of the high dose region (40–45 Gy) with the model-based algorithms, relative to the PB method, due to the electron scattering of dose away from the tumor into normal lung tissue (Fig. 1). Consequently, the high-dose lung region falls within the 40–45 Gy dose range, causing an increase in HU change in this region, as predicted by model-based algorithms (Fig. 2). The patient with the highest HU change (∼110) had mild radiation pneumonitis, and the patient with HU change of ∼80–90 had shortness of breath. No evidence of pneumonitis was observed for the 3 patients with smaller CT density changes (<50 HU). Changes in CT densities, and the dose-response correlation, as computed with model-based algorithms, are in excellent agreement with the findings of Palma et al. (1–2). Conclusion: Dose computed with PB (1D or 3D) algorithms was poorly correlated with clinically relevant CT density changes, as opposed to model-based algorithms. A larger cohort of patients is needed to confirm these results. This work was supported in part by a grant from Varian Medical Systems, Palo Alto, CA.
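    A minimal sketch of the Palma-style analysis described above: per-voxel HU change binned by mapped dose in 5 Gy bins from 5–45 Gy. The voxel arrays are synthetic stand-ins for registered planning/follow-up CT data:

      import numpy as np

      rng = np.random.default_rng(3)
      dose = rng.uniform(0, 50, 100_000)                     # mapped dose per voxel (Gy)
      hu_change = 2.0 * dose + rng.normal(0, 15, dose.size)  # toy dose-response in HU

      edges = np.arange(5, 50, 5)                            # 5-45 Gy in 5 Gy bins
      idx = np.digitize(dose, edges)
      for b in range(1, len(edges)):
          sel = idx == b
          print(f"{edges[b-1]:2d}-{edges[b]:2d} Gy: mean dHU = {hu_change[sel].mean():6.1f}")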

  20. Superficial dose evaluation of four dose calculation algorithms

    NASA Astrophysics Data System (ADS)

    Cao, Ying; Yang, Xiaoyu; Yang, Zhen; Qiu, Xiaoping; Lv, Zhiping; Lei, Mingjun; Liu, Gui; Zhang, Zijian; Hu, Yongmei

    2017-08-01

    Accurate superficial dose calculation is of major importance because of skin toxicity in radiotherapy, with the initial 2 mm depth being considered the most clinically relevant. The aim of this study was to evaluate the superficial dose calculation accuracy of four commonly used algorithms in commercially available treatment planning systems (TPS) by Monte Carlo (MC) simulation and film measurements. The superficial dose in a simple geometrical phantom with size of 30 cm×30 cm×30 cm was calculated by PBC (Pencil Beam Convolution), AAA (Analytical Anisotropic Algorithm) and AXB (Acuros XB) in the Eclipse system and by CCC (Collapsed Cone Convolution) in the Raystation system, under the conditions of source to surface distance (SSD) of 100 cm and field size (FS) of 10×10 cm². The EGSnrc (BEAMnrc/DOSXYZnrc) program was used to simulate the central axis dose distribution of a Varian Trilogy accelerator, combined with measurements of the superficial dose distribution by an extrapolation method using multilayer radiochromic films, to estimate the dose calculation accuracy of the four algorithms in the superficial region, which is specified in detail by the ICRU (International Commission on Radiation Units and Measurements) and the ICRP (International Commission on Radiological Protection). In the superficial region, good agreement was achieved between MC simulation and the film extrapolation method, with mean differences less than 1%, 2% and 5% for 0°, 30° and 60°, respectively. The relative skin dose errors were 0.84%, 1.88% and 3.90%; the mean dose discrepancies (0°, 30° and 60°) between each of the four algorithms and MC simulation were (2.41±1.55%, 3.11±2.40%, and 1.53±1.05%), (3.09±3.00%, 3.10±3.01%, and 3.77±3.59%), (3.16±1.50%, 8.70±2.84%, and 18.20±4.10%) and (14.45±4.66%, 10.74±4.54%, and 3.34±3.26%) for AXB, CCC, AAA and PBC, respectively. Monte Carlo simulation verified the feasibility of the superficial dose measurements by multilayer Gafchromic films. The ranking of superficial dose calculation accuracy among the four algorithms was AXB > CCC > AAA > PBC. Care should be taken when using the AAA and PBC algorithms for superficial dose calculation.

  1. Accuracy assessment of pharmacogenetically predictive warfarin dosing algorithms in patients of an academic medical center anticoagulation clinic.

    PubMed

    Shaw, Paul B; Donovan, Jennifer L; Tran, Maichi T; Lemon, Stephenie C; Burgwinkle, Pamela; Gore, Joel

    2010-08-01

    The objectives of this retrospective cohort study were to evaluate the accuracy of pharmacogenetic warfarin dosing algorithms in predicting therapeutic dose and to determine whether this degree of accuracy warrants the routine use of genotyping to prospectively dose patients newly started on warfarin. Seventy-one patients of an outpatient anticoagulation clinic at an academic medical center, who were age 18 years or older on a stable, therapeutic warfarin dose with an international normalized ratio (INR) goal between 2.0 and 3.0, and with cytochrome P450 isoenzyme 2C9 (CYP2C9) and vitamin K epoxide reductase complex subunit 1 (VKORC1) genotypes available between January 1, 2007 and September 30, 2008, were included. Six pharmacogenetic warfarin dosing algorithms were identified from the medical literature. Additionally, a 5 mg fixed dose approach was evaluated. Three algorithms, Zhu et al. (Clin Chem 53:1199-1205, 2007), Gage et al. (J Clin Ther 84:326-331, 2008), and the International Warfarin Pharmacogenetic Consortium (IWPC) (N Engl J Med 360:753-764, 2009), were similar in the primary accuracy endpoints, with mean absolute error (MAE) ranging from 1.7 to 1.8 mg/day and coefficient of determination R² from 0.61 to 0.66. However, the Zhu et al. algorithm severely over-predicted dose (defined as ≥2× or ≥2 mg/day more than the actual dose) in twice as many patients (14 vs. 7%) as Gage et al. 2008 and IWPC 2009. In conclusion, the algorithms published by Gage et al. 2008 and the IWPC 2009 were the two most accurate pharmacogenetically based equations available in the medical literature for predicting therapeutic warfarin dose in our study population. However, the degree of accuracy demonstrated does not support the routine use of genotyping to prospectively dose all patients newly started on warfarin.
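    A minimal sketch of the accuracy endpoints used above: mean absolute error plus the rate of severe over-prediction (predicted dose at least twice, or at least 2 mg/day more than, the actual stable dose), computed on invented data:

      import numpy as np

      def dosing_accuracy(predicted, actual):
          """Return MAE (mg/day) and the severe over-prediction rate."""
          mae = np.mean(np.abs(predicted - actual))
          over = np.mean((predicted >= 2 * actual) | (predicted >= actual + 2))
          return mae, over

      rng = np.random.default_rng(4)
      actual = rng.gamma(9, 0.5, 71)                  # stable doses (mg/day)
      predicted = actual + rng.normal(0.5, 1.5, 71)   # a biased toy algorithm
      mae, over = dosing_accuracy(predicted, actual)
      print(f"MAE = {mae:.1f} mg/day, severe over-prediction in {over:.0%} of patients")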

  2. TU-A-12A-07: CT-Based Biomarkers to Characterize Lung Lesion: Effects of CT Dose, Slice Thickness and Reconstruction Algorithm Based Upon a Phantom Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, B; Tan, Y; Tsai, W

    2014-06-15

    Purpose: Radiogenomics promises the ability to study cancer tumor genotype from the phenotype obtained through radiographic imaging. However, little attention has been paid to the sensitivity of image features, the image-based biomarkers, to imaging acquisition techniques. This study explores the impact of CT dose, slice thickness and reconstruction algorithm on measuring image features using a thorax phantom. Methods: Twenty-four phantom lesions of known volume (1 and 2 mm), shape (spherical, elliptical, lobular and spicular) and density (-630, -10 and +100 HU) were scanned on a GE VCT at four doses (25, 50, 100, and 200 mAs). For each scan, six image series were reconstructed at three slice thicknesses of 5, 2.5 and 1.25 mm with contiguous intervals, using the lung and standard reconstruction algorithms. The lesions were segmented with an in-house 3D algorithm. Fifty (50) image features representing lesion size, shape, edge, and density distribution/texture were computed. A regression method was employed to analyze the effect of CT dose, slice thickness and reconstruction algorithm on these features, adjusting for 3 confounding factors (size, density and shape of the phantom lesions). Results: The coefficients of CT dose, slice thickness and reconstruction algorithm are presented in Table 1 in the supplementary material. No significant difference was found between the image features calculated on low dose CT scans (25 mAs and 50 mAs). About 50% of texture features were found statistically different between low doses and high doses (100 and 200 mAs). Significant differences were found for almost all features when calculated on 1.25 mm, 2.5 mm, and 5 mm slice thickness images. Reconstruction algorithms significantly affected all density-based image features, but not morphological features. Conclusions: There is a great need to standardize CT imaging protocols for radiogenomics studies because CT dose, slice thickness and reconstruction algorithm impact quantitative image features to various degrees, as our study has shown.

  3. Design of Genetic Algorithms for Topology Control of Unmanned Vehicles

    DTIC Science & Technology

    2010-01-01

    We present genetic algorithms (GAs) as a decentralised topology control mechanism, distributed among active running software agents, to achieve a uniform spread of terrestrial unmanned vehicles (UVs) over an unknown geographical terrain. GAs serve as the bio-inspired topology control algorithm. The topology control of UVs using a decentralised solution over an unknown geographical terrain is a challenging problem.

  4. Pilot validation of an individualised pharmacokinetic algorithm for protamine dosing after systemic heparinisation for cardiopulmonary bypass.

    PubMed

    Miles, Lachlan F; Marchiori, Paolo; Falter, Florian

    2017-09-01

    This manuscript represents a pilot study assessing the feasibility of a single-compartment, individualised, pharmacokinetic algorithm for protamine dosing after cardiopulmonary bypass. It was a pilot cohort study in a specialist NHS cardiothoracic hospital targeting patients undergoing elective cardiac surgery using cardiopulmonary bypass. Patients received protamine doses according to a pharmacokinetic algorithm (n = 30) or using an empirical, fixed-dose model (n = 30). Categorical differences between the groups were evaluated using the Chi-squared test or Fisher's exact test. Continuous data were analysed using a paired Student's t-test for parametric data and the paired samples Wilcoxon test for non-parametric data. Patients who had protamine dosing according to the algorithm demonstrated a lower protamine requirement post-bypass relative to empirical management, as measured by absolute dose (243 ± 49 mg vs. 305 ± 34.7 mg; p<0.001) and the heparin to protamine ratio (0.79 ± 0.12 vs. 1.1 ± 0.15; p<0.001). There was no difference in the pre- to post-bypass activated clotting time (ACT) ratio (1.05 ± 0.12 vs. 1.02 ± 0.15; p=0.9). Patients who received protamine according to the algorithm had no significant difference in transfusion requirement (13.3% vs. 30.0%; p=0.21). This study showed that an individualised pharmacokinetic algorithm for the reversal of heparin after cardiopulmonary bypass is feasible in comparison with a fixed dosing strategy and may reduce the protamine requirement following on-pump cardiac surgery.

  5. An Accurate Fire-Spread Algorithm in the Weather Research and Forecasting Model Using the Level-Set Method

    NASA Astrophysics Data System (ADS)

    Muñoz-Esparza, Domingo; Kosović, Branko; Jiménez, Pedro A.; Coen, Janice L.

    2018-04-01

    The level-set method is typically used to track and propagate the fire perimeter in wildland fire models. Herein, a high-order level-set method using a fifth-order WENO scheme for the discretization of spatial derivatives and third-order explicit Runge-Kutta temporal integration is implemented within the Weather Research and Forecasting model wildland fire physics package, WRF-Fire. The algorithm includes the solution of an additional partial differential equation for level-set reinitialization. The accuracy of the fire-front shape and rate of spread in uncoupled simulations is systematically analyzed. It is demonstrated that the common implementation used by level-set-based wildfire models yields rate-of-spread errors in the range 10-35% for typical grid sizes (Δ = 12.5-100 m) and considerably underestimates fire area. Moreover, the amplitude of fire-front gradients in the presence of explicitly resolved turbulence features is systematically underestimated. In contrast, the new WRF-Fire algorithm results in rate-of-spread errors that are lower than 1% and that become nearly grid independent. Also, the underestimation of fire area at the sharp transition between the fire front and the lateral flanks is found to be reduced by a factor of ≈7. A hybrid-order level-set method with locally reduced artificial viscosity is proposed, which substantially alleviates the computational cost associated with high-order discretizations while preserving accuracy. Simulations of the Last Chance wildfire demonstrate additional benefits of high-order accurate level-set algorithms when dealing with complex fuel heterogeneities, enabling propagation across narrow fuel gaps and more accurate fire backing over the lee side of no-fuel clusters.
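    A heavily simplified sketch of level-set front propagation, phi_t + R|∇phi| = 0, using a first-order Rouy-Tourin upwind update with a constant spread rate; the paper's implementation uses fifth-order WENO with third-order Runge-Kutta, so this only conveys the idea:

      import numpy as np

      n, dx, dt, rate = 128, 10.0, 0.5, 2.0        # grid cells; m; s; spread rate m/s
      y, x = np.mgrid[0:n, 0:n] * dx
      phi = np.sqrt((x - 640.0) ** 2 + (y - 640.0) ** 2) - 50.0  # signed distance to ignition circle

      for _ in range(400):                         # 200 s of spread
          dmx = (phi - np.roll(phi, 1, axis=1)) / dx    # backward differences
          dpx = (np.roll(phi, -1, axis=1) - phi) / dx   # forward differences
          dmy = (phi - np.roll(phi, 1, axis=0)) / dx
          dpy = (np.roll(phi, -1, axis=0) - phi) / dx
          grad = np.sqrt(np.maximum(dmx, 0) ** 2 + np.minimum(dpx, 0) ** 2 +
                         np.maximum(dmy, 0) ** 2 + np.minimum(dpy, 0) ** 2)
          phi -= dt * rate * grad                  # advance the front outward

      print(f"burned area: {(phi < 0).sum() * dx * dx:.0f} m^2")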

  6. Comparison among Reconstruction Algorithms for Quantitative Analysis of 11C-Acetate Cardiac PET Imaging.

    PubMed

    Shi, Ximin; Li, Nan; Ding, Haiyan; Dang, Yonghong; Hu, Guilan; Liu, Shuai; Cui, Jie; Zhang, Yue; Li, Fang; Zhang, Hui; Huo, Li

    2018-01-01

    Kinetic modeling of dynamic 11C-acetate PET imaging provides quantitative information for myocardium assessment. The quality and quantitation of PET images are known to be dependent on PET reconstruction methods. This study aims to investigate the impacts of reconstruction algorithms on the quantitative analysis of dynamic 11C-acetate cardiac PET imaging. Suspected alcoholic cardiomyopathy patients (N = 24) underwent 11C-acetate dynamic PET imaging after a low dose CT scan. PET images were reconstructed using four algorithms: filtered backprojection (FBP), ordered subsets expectation maximization (OSEM), OSEM with time-of-flight (TOF), and OSEM with both time-of-flight and point-spread-function (TPSF). Standardized uptake values (SUVs) at different time points were compared among images reconstructed using the four algorithms. Time-activity curves (TACs) in myocardium and blood pools of ventricles were generated from the dynamic image series. Kinetic parameters K1 and k2 were derived using a 1-tissue-compartment model for kinetic modeling of cardiac flow from 11C-acetate PET images. Significant image quality improvement was found in the images reconstructed using the iterative OSEM-type algorithms (OSEM, TOF, and TPSF) compared with FBP. However, no statistical differences in SUVs were observed among the four reconstruction methods at the selected time points. Kinetic parameters K1 and k2 also exhibited no statistical difference among the four reconstruction algorithms in terms of mean value and standard deviation. However, in the correlation analysis, OSEM reconstruction presented relatively higher residuals in correlation with FBP reconstruction compared with TOF and TPSF reconstruction, and TOF and TPSF reconstruction were highly correlated with each other. All the tested reconstruction algorithms performed similarly for quantitative analysis of 11C-acetate cardiac PET imaging. TOF and TPSF yielded highly consistent kinetic parameter results with superior image quality compared with FBP. OSEM was relatively less reliable. Both TOF and TPSF are recommended for cardiac 11C-acetate kinetic analysis.
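    A minimal sketch of a 1-tissue-compartment fit: the tissue curve is modeled as K1 times the input function convolved with exp(-k2 t) and fitted by least squares. The input function, noise level and parameter values below are toys, not clinical data:

      import numpy as np
      from scipy.optimize import curve_fit

      t = np.linspace(0, 20, 120)                  # minutes
      cp = t * np.exp(-t / 1.5)                    # toy arterial input function
      dt = t[1] - t[0]

      def one_tcm(t, k1, k2):
          """C_T(t) = K1 * (Cp convolved with exp(-k2 t))."""
          kernel = np.exp(-k2 * t)
          return k1 * np.convolve(cp, kernel)[: t.size] * dt

      rng = np.random.default_rng(6)
      tac = one_tcm(t, 0.8, 0.15) + rng.normal(0, 0.01, t.size)  # noisy myocardial TAC
      (k1, k2), _ = curve_fit(one_tcm, t, tac, p0=(0.5, 0.1))
      print(f"K1 = {k1:.2f} mL/min/g, k2 = {k2:.3f} 1/min")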

  7. SU-FF-T-668: A Simple Algorithm for Range Modulation Wheel Design in Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nie, X; Nazaryan, Vahagn; Gueye, Paul

    2009-06-01

    Purpose: To develop a simple algorithm for designing the range modulation wheel to generate a very smooth spread-out Bragg peak (SOBP) for proton therapy. Method and Materials: A simple algorithm has been developed to generate the weight factors for the pristine Bragg peaks that compose a smooth SOBP in proton therapy. We used a modified analytical Bragg peak function, based on the Geant4 Monte Carlo simulation toolkit, as the pristine Bragg peak input to our algorithm. A simple MATLAB® quadratic program was introduced to optimize the cost function in our algorithm. Results: We found that the existing analytical Bragg peak function cannot be used directly as the pristine Bragg peak depth-dose input in the optimization of the weight factors, since this model does not take into account the scattering introduced by the range shifts used to modify the proton beam energies. We performed Geant4 simulations for a proton energy of 63.4 MeV with a 1.08 cm SOBP for the set of pristine Bragg peaks composing this SOBP, and modified the existing analytical Bragg peak functions for their peak heights, ranges R₀, and Gaussian energy spreads σE. We found that 19 pristine Bragg peaks are enough to achieve a SOBP flatness of 1.5%, the best flatness in the published literature. Conclusion: This work develops a simple algorithm to generate the weight factors used to design a range modulation wheel that produces a smooth SOBP in proton radiation therapy. We have found that a moderate number of pristine Bragg peaks is enough to generate a SOBP with flatness less than 2%. The algorithm has the potential to generate a database, stored in the treatment planning system, for producing a clinically acceptable SOBP.
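    A minimal sketch of the weight-factor optimization: non-negative least squares applied to a bank of range-shifted pristine peaks against a flat SOBP target. The analytic pseudo-Bragg curves below are crude stand-ins for the Geant4-derived peaks, and NNLS stands in for the quadratic program:

      import numpy as np
      from scipy.optimize import nnls

      depth = np.linspace(0, 3.2, 320)             # cm
      ranges = np.linspace(1.9, 3.0, 19)           # 19 shifted pristine peaks

      def pristine(depth, r80, width=0.08):
          """Toy Bragg curve: rising entrance dose plus a Gaussian peak."""
          entrance = 0.3 * (depth < r80) * (1 + 0.2 * depth / r80)
          peak = np.exp(-0.5 * ((depth - r80) / width) ** 2)
          return entrance + peak

      A = np.column_stack([pristine(depth, r) for r in ranges])
      target = ((depth > 1.9) & (depth < 3.0)).astype(float)   # 1.1 cm flat SOBP

      w, _ = nnls(A, target)                       # non-negative weight factors
      plateau = (A @ w)[(depth > 2.0) & (depth < 2.9)]
      flatness = (plateau.max() - plateau.min()) / (plateau.max() + plateau.min())
      print(f"SOBP flatness: {flatness * 100:.1f}%")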

  8. Continuous energy adjoint transport for photons in PHITS

    NASA Astrophysics Data System (ADS)

    Malins, Alex; Machida, Masahiko; Niita, Koji

    2017-09-01

    Adjoint Monte Carlo can be an efficient algorithm for solving photon transport problems where the size of the tally is relatively small compared to the source. Such problems are typical in environmental radioactivity calculations, where natural or fallout radionuclides spread over a large area contribute to the air dose rate at a particular location. Moreover, photon transport with continuous energy representation is vital for accurately calculating radiation protection quantities. Here we describe the incorporation of an adjoint Monte Carlo capability for continuous energy photon transport into the Particle and Heavy Ion Transport code System (PHITS). An adjoint cross section library for photon interactions was developed based on the JENDL-4.0 library, by adding cross sections for adjoint incoherent scattering and pair production. PHITS reads in the library and implements the adjoint transport algorithm of Hoogenboom. Adjoint pseudo-photons are spawned within the forward tally volume and transported through space. Currently pseudo-photons can undergo coherent and incoherent scattering within the PHITS adjoint function. Photoelectric absorption is treated implicitly. The calculation result is recovered from the pseudo-photon flux calculated over the true source volume. A new adjoint tally function facilitates this conversion. This paper gives an overview of the new function and discusses potential future developments.

  9. A comparison of the convolution and TMR10 treatment planning algorithms for Gamma Knife® radiosurgery

    PubMed Central

    Wright, Gavin; Harrold, Natalie; Bownes, Peter

    2018-01-01

    Aims To compare the accuracies of the convolution and TMR10 Gamma Knife treatment planning algorithms, and to assess the impact upon clinical practice of implementing convolution-based treatment planning. Methods Doses calculated by both algorithms were compared against ionisation chamber measurements in homogeneous and heterogeneous phantoms. Relative dose distributions calculated by both algorithms were compared against film-derived 2D isodose plots in a heterogeneous phantom, with distance-to-agreement (DTA) measured at the 80%, 50% and 20% isodose levels. A retrospective planning study compared 19 clinically acceptable metastasis convolution plans against TMR10 plans with matched shot times, allowing a novel comparison of true dosimetric parameters rather than total beam-on time. Gamma analysis and dose-difference analysis were performed on each pair of dose distributions. Results Both algorithms matched point dose measurements within ±1.1% in homogeneous conditions. Convolution provided superior point-dose accuracy in the heterogeneous phantom (-1.1% vs. 4.0%), with no discernible differences in relative dose distribution accuracy. In our study, convolution-calculated plans yielded D99% values 6.4% (95% CI: 5.5%-7.3%, p<0.001) lower than shot-matched TMR10 plans. For gamma passing criteria of 1%/1 mm, 16% of targets had passing rates >95%. The range of dose differences in the targets was 0.2-4.6 Gy. Conclusions Convolution provides superior accuracy versus TMR10 in heterogeneous conditions. Implementing convolution would result in increased target doses; therefore its implementation may require a re-evaluation of prescription doses. PMID:29657896

  10. Design of a TDOA location engine and development of a location system based on chirp spread spectrum.

    PubMed

    Wang, Rui-Rong; Yu, Xiao-Qing; Zheng, Shu-Wang; Ye, Yang

    2016-01-01

    Location based services (LBS) provided by wireless sensor networks have garnered a great deal of attention from researchers and developers in recent years. Chirp spread spectrum (CSS) signal formatting with time difference of arrival (TDOA) ranging technology is an effective LBS technique with regard to positioning accuracy, cost, and power consumption. The design and implementation of the location engine and location management based on TDOA location algorithms were the focus of this study; as the core of the system, the location engine was designed as a series of location algorithms and smoothing algorithms. To enhance the location accuracy, a Kalman filter algorithm and a moving weighted average technique were respectively applied to smooth the TDOA range measurements and the location results, which are calculated by the cooperation of a Kalman TDOA algorithm and a Taylor TDOA algorithm. The location management server, the information center of the system, was designed with Data Server and Mclient. To evaluate the performance of the location algorithms and the stability of the system software, we used a Nanotron nanoLOC Development Kit 3.0 to conduct indoor and outdoor location experiments. The results indicated that the location system runs stably with high accuracy, with absolute error below 0.6 m.
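    A minimal sketch of the range-smoothing stage: a scalar Kalman filter with a random-walk state model applied to noisy range measurements. The noise parameters are illustrative, not the kit's configuration:

      import numpy as np

      def kalman_smooth(z, q=0.01, r=0.25):
          """q: process variance (m^2), r: measurement variance (m^2)."""
          x, p = z[0], 1.0
          out = []
          for zk in z:
              p = p + q                     # predict: state uncertainty grows
              k = p / (p + r)               # Kalman gain
              x = x + k * (zk - x)          # update with the new measurement
              p = (1 - k) * p
              out.append(x)
          return np.array(out)

      rng = np.random.default_rng(5)
      true_range = np.linspace(4.0, 6.0, 100)        # tag slowly walking away (m)
      measured = true_range + rng.normal(0, 0.5, 100)
      smoothed = kalman_smooth(measured)
      print(f"raw RMSE {np.std(measured - true_range):.2f} m -> "
            f"filtered {np.std(smoothed - true_range):.2f} m")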

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paudel, M R; Beachey, D J; Sarfehnia, A

    Purpose: A new commercial GPU-based Monte Carlo dose calculation algorithm (GPUMCD) developed by the vendor Elekta™ to be used in the Monaco Treatment Planning System (TPS) is capable of modeling dose for both a standard linear accelerator and for an Elekta MRI-Linear accelerator (modeling magnetic field effects). We are evaluating this algorithm in two parts: commissioning the algorithm for an Elekta Agility linear accelerator (the focus of this work) and evaluating the algorithm’s ability to model magnetic field effects for an MRI-linear accelerator. Methods: A beam model was developed in the Monaco TPS (v.5.09.06) using the commissioned beam data formore » a 6MV Agility linac. A heterogeneous phantom representing tumor-in-lung, lung, bone-in-tissue, and prosthetic was designed/built. Dose calculations in Monaco were done using the current clinical algorithm (XVMC) and the new GPUMCD algorithm (1 mm3 voxel size, 0.5% statistical uncertainty) and in the Pinnacle TPS using the collapsed cone convolution (CCC) algorithm. These were compared with the measured doses using an ionization chamber (A1SL) and Gafchromic EBT3 films for 2×2 cm{sup 2}, 5×5 cm{sup 2}, and 10×10 cm{sup 2} field sizes. Results: The calculated central axis percentage depth doses (PDDs) in homogeneous solid water were within 2% compared to measurements for XVMC and GPUMCD. For tumor-in-lung and lung phantoms, doses calculated by all of the algorithms were within the experimental uncertainty of the measurements (±2% in the homogeneous phantom and ±3% for the tumor-in-lung or lung phantoms), except for 2×2 cm{sup 2} field size where only the CCC algorithm differs from film by 5% in the lung region. The analysis for bone-in-tissue and the prosthetic phantoms are ongoing. Conclusion: The new GPUMCD algorithm calculated dose comparable to both the XVMC algorithm and to measurements in both a homogeneous solid water medium and the heterogeneous phantom representing lung or tumor-in-lung for 2×2 cm{sup 2}-10×10 cm{sup 2} field sizes. Funding support was obtained from Elekta.« less

  12. Speed and convergence properties of gradient algorithms for optimization of IMRT.

    PubMed

    Zhang, Xiaodong; Liu, Helen; Wang, Xiaochun; Dong, Lei; Wu, Qiuwen; Mohan, Radhe

    2004-05-01

    Gradient algorithms are the most commonly employed search methods in the routine optimization of IMRT plans. It is well known that local minima can exist for dose-volume-based and biology-based objective functions. The purpose of this paper is to compare the relative speed of different gradient algorithms, to investigate the strategies for accelerating the optimization process, to assess the validity of these strategies, and to study the convergence properties of these algorithms for dose-volume and biological objective functions. With these aims in mind, we implemented Newton's, conjugate gradient (CG), and the steepest descent (SD) algorithms for dose-volume- and EUD-based objective functions. Our implementation of Newton's algorithm approximates the second derivative matrix (Hessian) by its diagonal. The standard SD algorithm and the CG algorithm with "line minimization" were also implemented. In addition, we investigated the use of a variation of the CG algorithm, called the "scaled conjugate gradient" (SCG) algorithm. To accelerate the optimization process, we investigated the validity of the use of a "hybrid optimization" strategy, in which approximations to calculated dose distributions are used during most of the iterations. Published studies have indicated that getting trapped in local minima is not a significant problem. To investigate this issue further, we first obtained, by trial and error, and starting with uniform intensity distributions, the parameters of the dose-volume- or EUD-based objective functions which produced IMRT plans that satisfied the clinical requirements. Using the resulting optimized intensity distributions as the initial guess, we investigated the possibility of getting trapped in a local minimum. For most of the results presented, we used a lung cancer case. To illustrate the generality of our methods, the results for a prostate case are also presented. For both dose-volume- and EUD-based objective functions, Newton's method far outperforms other algorithms in terms of speed. The SCG algorithm, which avoids expensive "line minimization," can speed up the standard CG algorithm by at least a factor of 2. For the same initial conditions, all algorithms converge essentially to the same plan. However, we demonstrate that for any of the algorithms studied, starting with previously optimized intensity distributions as the initial guess but for different objective function parameters, the solution frequently gets trapped in local minima. We found that the initial intensity distribution obtained from IMRT optimization utilizing objective function parameters, which favor a specific anatomic structure, would lead to a local minimum corresponding to that structure. Our results indicate that from among the gradient algorithms tested, Newton's method appears to be the fastest by far. Different gradient algorithms have the same convergence properties for dose-volume- and EUD-based objective functions. The hybrid dose calculation strategy is valid and can significantly accelerate the optimization process. The degree of acceleration achieved depends on the type of optimization problem being addressed (e.g., IMRT optimization, intensity modulated beam configuration optimization, or objective function parameter optimization). Under special conditions, gradient algorithms will get trapped in local minima, and reoptimization, starting with the results of previous optimization, will lead to solutions that are generally not significantly different from the local minimum.
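    A toy contrast of two of the update rules compared above, on an ill-conditioned quadratic: steepest descent steps along the negative gradient with a fixed step, while the diagonal-Newton variant rescales each component by the inverse of the corresponding diagonal Hessian entry. This is an illustration only, not the paper's IMRT objective:

      import numpy as np

      h = np.array([1.0, 25.0, 100.0])             # ill-conditioned diagonal Hessian
      grad = lambda x: h * x                       # gradient of f(x) = 0.5 x^T H x

      def minimize(step_fn, iters=50):
          x = np.ones(3)
          for _ in range(iters):
              x = x - step_fn(x)
          return 0.5 * np.sum(h * x * x)           # final objective value

      sd = minimize(lambda x: 0.018 * grad(x))     # fixed step, kept below 2/max(h)
      newton = minimize(lambda x: grad(x) / h)     # diagonal-Newton rescaled step
      print(f"steepest descent f = {sd:.3e}, diagonal Newton f = {newton:.3e}")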

  13. Effects of interleukin-1β on cortical spreading depolarization and cerebral vasculature

    PubMed Central

    Eitner, Annett; Leuchtweis, Johannes; Bauer, Reinhard; Lehmenkühler, Alfred; Schaible, Hans-Georg

    2016-01-01

    During brain damage and ischemia, the cytokine interleukin-1β is rapidly upregulated due to activation of inflammasomes. We studied whether interleukin-1β influences cortical spreading depolarization, and whether lipopolysaccharide, often used for microglial stimulation, influences cortical spreading depolarizations. In anaesthetized rats, cortical spreading depolarizations were elicited by microinjection of KCl. Interleukin-1β, the IL-1 receptor 1 antagonist, the GABAA receptor blocker bicuculline, and lipopolysaccharide were administered either alone or combined (interleukin-1β + IL-1 receptor 1 antagonist; interleukin-1β + bicuculline; lipopolysaccharide + IL-1 receptor 1 antagonist) into a local cortical treatment area. Using microelectrodes, cortical spreading depolarizations were recorded in a non-treatment and in the treatment area. Plasma extravasation in cortical grey matter was assessed with Evans blue. Local application of interleukin-1β reduced cortical spreading depolarization amplitudes in the treatment area, but not at a high dose. This reduction was prevented by the IL-1 receptor 1 antagonist and by bicuculline. However, interleukin-1β induced pronounced plasma extravasation independently of cortical spreading depolarizations. Application of lipopolysaccharide reduced cortical spreading depolarization amplitudes but prolonged their duration; EEG activity was still present. These effects were also blocked by the IL-1 receptor 1 antagonist. Interleukin-1β evokes changes of neuronal activity and of vascular functions. Thus, although the reduction of cortical spreading depolarization amplitudes at lower doses of interleukin-1β may reduce deleterious effects of cortical spreading depolarizations, the sum of interleukin-1β effects on excitability and on the vasculature rather promotes brain-damaging mechanisms. PMID:27037093

  14. SU-E-T-374: Evaluation and Verification of Dose Calculation Accuracy with Different Dose Grid Sizes for Intracranial Stereotactic Radiosurgery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, C; Schultheiss, T

    Purpose: In this study, we aim to evaluate the effect of dose grid size on the accuracy of calculated dose for small lesions in intracranial stereotactic radiosurgery (SRS), and to verify dose calculation accuracy with radiochromic film dosimetry. Methods: 15 intracranial lesions from previous SRS patients were retrospectively selected for this study. The planning target volume (PTV) ranged from 0.17 to 2.3 cm³. A commercial treatment planning system was used to generate SRS plans with the volumetric modulated arc therapy (VMAT) technique using two arc fields. Two convolution-superposition-based dose calculation algorithms (Anisotropic Analytical Algorithm and Acuros XB algorithm) were used to calculate the volume dose distribution, with dose grid size ranging from 1 mm to 3 mm in 0.5 mm steps. First, while the plan monitor units (MU) were kept constant, PTV dose variations were analyzed. Second, with 95% of the PTV covered by the prescription dose, variations of the plan MUs as a function of dose grid size were analyzed. Radiochromic films were used to compare the delivered dose and profile with the calculated dose distribution at different dose grid sizes. Results: The dose to the PTV, in terms of the mean, maximum, and minimum dose, showed a steady decrease with increasing dose grid size using both algorithms. With 95% of the PTV covered by the prescription dose, the total MU increased with increasing dose grid size in most of the plans. Radiochromic film measurements showed better agreement with dose distributions calculated with 1-mm dose grid size. Conclusion: Dose grid size has a significant impact on the calculated dose distribution in intracranial SRS treatment planning with small target volumes. Using the default dose grid size could lead to under-estimation of the delivered dose. A small dose grid size should be used to ensure calculation accuracy and agreement with QA measurements.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Badkul, R; Nicolai, W; Pokhrel, D

    Purpose: To compare the impact of the Pencil Beam (PB) and Anisotropic Analytic Algorithm (AAA) dose calculation algorithms on OARs and the planning target volume (PTV) in thoracic spine stereotactic body radiation therapy (SBRT). Methods: Ten spine SBRT patients were planned on the Brainlab iPlan system using a hybrid plan consisting of 1–2 non-coplanar conformal-dynamic arcs and a few IMRT beams, treated on a NovalisTx with 6-MV photons. The dose prescription varied from 20 Gy to 30 Gy in 5 fractions depending on the patient's situation. PB plans were retrospectively recalculated in Varian Eclipse with the AAA algorithm using the same MUs, MLC pattern, and grid size (3 mm). Differences in dose-volume parameters for the PTV, spinal cord, lung, and esophagus were analyzed and compared between the PB and AAA algorithms. OAR constraints followed RTOG-0631. Results: Since the patients were treated using PB calculations, all AAA DVH values were compared against the PB plan values as the standard, although AAA predicts the dose more accurately than PB. PTV(min), PTV(max), PTV(mean), PTV(D99%), and PTV(D90%) were overestimated with the AAA calculation on average by 3.5%, 1.84%, 0.95%, 3.98%, and 1.55%, respectively, compared to PB. All lung DVH parameters were underestimated with the AAA algorithm; the mean deviations of lung V20, V10, V5, and 1000cc were 42.81%, 19.83%, 18.79%, and 18.35%, respectively. AAA overestimated cord (0.35cc) by a mean of 17.3%, cord (0.03cc) by 12.19%, and cord (max) by 10.5% compared to PB. The esophagus maximum dose was overestimated by 4.4% and the 5cc dose by 3.26% for the AAA algorithm compared to PB. Conclusion: AAA overestimated the PTV dose values by up to 4%. The lung DVH had the greatest underestimation of dose by AAA versus PB. Spinal cord dose was overestimated by AAA versus PB. Given the critical importance of accurate OAR and PTV dose calculation for spine SBRT, more accurate algorithms and validation of calculated doses in phantom models are indicated.

  16. Comparison study of reconstruction algorithms for prototype digital breast tomosynthesis using various breast phantoms.

    PubMed

    Kim, Ye-seul; Park, Hye-suk; Lee, Haeng-Hwa; Choi, Young-Wook; Choi, Jae-Gu; Kim, Hak Hee; Kim, Hee-Joung

    2016-02-01

    Digital breast tomosynthesis (DBT) is a recently developed system for three-dimensional imaging that offers the potential to reduce the false positives of mammography by preventing tissue overlap. Many previous qualitative evaluations of digital breast tomosynthesis used unrealistic phantom models whose background and noise are not representative of real breasts. The purpose of the present work was to compare reconstruction algorithms for DBT by using various breast phantoms; validation was also performed by using patient images. DBT was performed by using a prototype unit that was optimized for very low exposures and rapid readout. Three algorithms were compared: a back-projection (BP) algorithm, a filtered BP (FBP) algorithm, and an iterative expectation maximization (EM) algorithm. To compare the algorithms, three types of breast phantoms (homogeneous background phantom, heterogeneous background phantom, and anthropomorphic breast phantom) were evaluated, and clinical images were also reconstructed by using the different reconstruction algorithms. The in-plane image quality was evaluated based on the line profile and the contrast-to-noise ratio (CNR), and out-of-plane artifacts were evaluated by means of the artifact spread function (ASF). Parenchymal texture features of contrast and homogeneity were computed based on reconstructed images of an anthropomorphic breast phantom. The clinical images were studied to validate the effect of the reconstruction algorithms. The results showed that the CNRs of masses reconstructed by using the EM algorithm were slightly higher than those obtained by using the BP algorithm, whereas the FBP algorithm yielded a much lower CNR due to its high fluctuations of background noise. The FBP algorithm provides the best conspicuity for larger calcifications by enhancing their contrast and sharpness more than the other algorithms; however, in the case of small and low-contrast microcalcifications, FBP reduced detectability due to its increased noise. The EM algorithm yielded high conspicuity for both microcalcifications and masses and yielded better ASFs in terms of the full width at half maximum. Texture analysis showed higher contrast and lower homogeneity for the FBP algorithm than for the other algorithms. Patient images reconstructed using the EM algorithm showed high visibility of low-contrast masses with clear borders. In this study, we compared three reconstruction algorithms by using various kinds of breast phantoms and patient cases. Future work using these algorithms and considering the type of the breast and the acquisition techniques used (e.g., angular range, dose distribution) should include the use of actual patients or patient-like phantoms to increase the potential for practical applications.
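
    For readers unfamiliar with the iterative EM approach compared above, the following is a minimal MLEM sketch in NumPy on a toy system matrix; the matrix, sizes, and iteration count are illustrative assumptions rather than the prototype's actual projection geometry.

```python
import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    """MLEM sketch: multiplicative update x <- x * A^T(y / Ax) / A^T 1,
    for a system matrix A (rays x voxels) and measured projections y."""
    x = np.ones(A.shape[1])                 # flat initial estimate
    sens = A.T @ np.ones(A.shape[0])        # sensitivity image A^T 1
    for _ in range(n_iter):
        x *= (A.T @ (y / np.maximum(A @ x, eps))) / np.maximum(sens, eps)
    return x

# Toy geometry: 3 voxels seen by 4 rays with known weights.
A = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.5],
              [0.5, 0.0, 1.0],
              [0.3, 0.3, 0.3]])
x_true = np.array([2.0, 0.5, 1.0])
print(mlem(A, A @ x_true))                  # converges towards x_true
```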

  17. Algorithms For Integrating Nonlinear Differential Equations

    NASA Technical Reports Server (NTRS)

    Freed, A. D.; Walker, K. P.

    1994-01-01

    Improved algorithms were developed for use in the numerical integration of systems of nonhomogeneous, nonlinear, first-order ordinary differential equations. In comparison with conventional integration algorithms, these algorithms offer greater stability and accuracy. Several are asymptotically correct, thereby retaining stability and accuracy when large increments of the independent variable are used. Attainable accuracies are demonstrated by applying the algorithms to systems of nonlinear, first-order differential equations that arise in the study of viscoplastic behavior, the spread of the acquired immune-deficiency syndrome (AIDS) virus, and predator/prey populations.

  18. Comparison of dosimetric and radiobiological parameters on plans for prostate stereotactic body radiotherapy using an endorectal balloon for different dose-calculation algorithms and delivery-beam modes

    NASA Astrophysics Data System (ADS)

    Kang, Sang-Won; Suh, Tae-Suk; Chung, Jin-Beom; Eom, Keun-Yong; Song, Changhoon; Kim, In-Ah; Kim, Jae-Sung; Lee, Jeong-Woo; Cho, Woong

    2017-02-01

    The purpose of this study was to evaluate the impact on the dosimetric and radiobiological parameters of treatment plans generated using different dose-calculation algorithms and delivery-beam modes for prostate stereotactic body radiation therapy using an endorectal balloon. For 20 patients with prostate cancer, stereotactic body radiation therapy (SBRT) plans were generated by using a 10-MV photon beam with flattening filter (FF) and flattening-filter-free (FFF) modes. The total treatment dose prescribed was 42.7 Gy in 7 fractions to cover at least 95% of the planning target volume (PTV) with 95% of the prescribed dose. The dose computation was initially performed using an anisotropic analytical algorithm (AAA) in the Eclipse treatment planning system (Varian Medical Systems, Palo Alto, CA) and was then recalculated using Acuros XB (AXB V. 11.0.34) with the same monitor units and multileaf collimator files. The dosimetric and radiobiological parameters for the PTV and organs at risk (OARs) were analyzed from the dose-volume histogram. An obvious difference in dosimetric parameters between the AAA and the AXB plans was observed in the PTV and rectum. Doses to the PTV, excluding the maximum dose, were always higher in the AAA plans than in the AXB plans. However, doses to the other OARs were similar for both algorithms. In addition, no difference was observed in the dosimetric parameters for different delivery-beam modes when the same algorithm was used to generate the plans. Consistent with the dosimetric parameters, the radiobiological parameters for the two algorithms showed an apparent difference in the PTV and the rectum. The average tumor control probability of the AAA plans was higher than that of the AXB plans. The average normal tissue complication probability (NTCP) for the rectum was lower in the AXB plans than in the AAA plans. The AAA and the AXB plans yielded very similar NTCPs for the other OARs. In plans using the same algorithm, the NTCPs for the delivery-beam modes showed no differences. This study demonstrated that the dosimetric and radiobiological parameters for the PTV and the rectum depended on the dose-calculation algorithm for prostate SBRT using an endorectal balloon. However, the dosimetric and radiobiological parameters in the AAA and the AXB plans for the other OARs were similar. Furthermore, no differences in the dosimetric and radiobiological parameters were found between delivery-beam modes when the same algorithm was used to generate the treatment plan.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cebe, M; Pacaci, P; Mabhouti, H

    Purpose: In this study, the two available calculation algorithms of the Varian Eclipse treatment planning system (TPS), the electron Monte Carlo (eMC) and Generalized Gaussian Pencil Beam (GGPB) algorithms, were used to compare measured and calculated peripheral dose distributions of electron beams. Methods: Peripheral dose measurements were carried out for the 6, 9, 12, 15, 18, and 22 MeV electron beams of a Varian Trilogy machine using a parallel-plate ionization chamber and EBT3 films in a slab phantom. Measurements were performed for 6×6, 10×10, and 25×25 cm² cone sizes at the dmax of each energy, up to 20 cm beyond the field edges. Using the same film batch, the net OD-to-dose calibration curve was obtained for each energy. Films were scanned 48 hours after irradiation using an Epson 1000XL flatbed scanner. Dose distributions measured using the parallel-plate ionization chamber and EBT3 film and calculated by the eMC and GGPB algorithms were compared. The measured and calculated data were then compared to find which algorithm calculates the peripheral dose distribution more accurately. Results: The agreement between measurement and eMC was better than for GGPB. The TPS underestimated the out-of-field doses. The difference between measured and calculated doses increased with the cone size. The largest deviation between calculated and parallel-plate ionization chamber measured doses was less than 4.93% for eMC, but up to 7.51% for GGPB. For the film measurements, the minimum gamma analysis passing rates between measured and calculated dose distributions were 98.2% and 92.7% for eMC and GGPB, respectively, over all field sizes and energies. Conclusion: Our results show that the Monte Carlo algorithm for electron planning in Eclipse is more accurate than previous algorithms for peripheral dose distributions. It must be emphasized that the use of GGPB for planning large-field treatments with 6 MeV could lead to inaccuracies of clinical significance.

  20. SU-E-T-538: Evaluation of IMRT Dose Calculation Based on Pencil-Beam and AAA Algorithms.

    PubMed

    Yuan, Y; Duan, J; Popple, R; Brezovich, I

    2012-06-01

    To evaluate the accuracy of dose calculation for intensity modulated radiation therapy (IMRT) based on the Pencil Beam (PB) and Analytical Anisotropic Algorithm (AAA) computation algorithms. IMRT plans of twelve patients with different treatment sites, including head/neck, lung, and pelvis, were investigated. For each patient, dose calculations with the PB and AAA algorithms using dose grid sizes of 0.5 cm, 0.25 cm, and 0.125 cm were compared with composite-beam ion chamber and film measurements in patient-specific QA. Discrepancies between calculation and measurement were evaluated by the percentage error for ion chamber dose and the γ>1 failure rate in gamma analysis (3%/3 mm) for film dosimetry. For 9 patients, the ion chamber dose calculated with the AAA algorithm was closer to the ion chamber measurement than that calculated with the PB algorithm with a grid size of 2.5 mm, though all calculated ion chamber doses were within 3% of the measurements. For head/neck patients and other patients with large treatment volumes, the γ>1 failure rate was significantly reduced (within 5%) with AAA-based treatment planning, compared to generally more than 10% with PB-based treatment planning (grid size = 2.5 mm). For lung and brain cancer patients with medium and small treatment volumes, γ>1 failure rates were typically within 5% for both AAA- and PB-based treatment planning (grid size = 2.5 mm). For both PB- and AAA-based treatment planning, improvements in dose calculation accuracy with finer dose grids were observed in the film dosimetry of 11 patients and in the ion chamber measurements of 3 patients. AAA-based treatment planning provides more accurate dose calculation for head/neck patients and other patients with large treatment volumes. Compared with film dosimetry, a γ>1 failure rate within 5% can be achieved with AAA-based treatment planning. © 2012 American Association of Physicists in Medicine.
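
    The 3%/3 mm gamma analysis used above combines a dose-difference tolerance with a distance-to-agreement tolerance. Below is a brute-force 1D sketch of the gamma index in that spirit (the profiles and tolerances are illustrative assumptions, not the study's data):

```python
import numpy as np

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dta_mm=3.0, dd_frac=0.03):
    """Brute-force 1D gamma index: for each reference point, minimize the
    combined distance-to-agreement and dose-difference metric over all
    evaluated points; gamma > 1 flags failure at that point."""
    d_norm = dd_frac * d_ref.max()                # global dose criterion
    g = np.empty_like(d_ref)
    for i, (xr, dr) in enumerate(zip(x_ref, d_ref)):
        term = ((x_eval - xr) / dta_mm) ** 2 + ((d_eval - dr) / d_norm) ** 2
        g[i] = np.sqrt(term.min())
    return g

x = np.linspace(0.0, 100.0, 201)                  # positions in mm
ref = np.exp(-(((x - 50.0) / 20.0) ** 2))         # reference profile
ev = 1.01 * np.exp(-(((x - 51.0) / 20.0) ** 2))   # shifted, rescaled profile
g = gamma_1d(x, ref, x, ev)
print(f"gamma>1 failure rate: {100.0 * (g > 1).mean():.1f}%")
```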

  1. Direct dose mapping versus energy/mass transfer mapping for 4D dose accumulation: fundamental differences and dosimetric consequences.

    PubMed

    Li, Haisen S; Zhong, Hualiang; Kim, Jinkoo; Glide-Hurst, Carri; Gulam, Misbah; Nurushev, Teamour S; Chetty, Indrin J

    2014-01-06

    The direct dose mapping (DDM) and energy/mass transfer (EMT) mapping are two essential algorithms for accumulating the dose from different anatomic phases to the reference phase when there is organ motion or tumor/tissue deformation during the delivery of radiation therapy. DDM is based on interpolation of the dose values from one dose grid to another and thus lacks rigor in defining the dose when multiple dose values are mapped to one dose voxel in the reference phase due to tissue/tumor deformation. On the other hand, EMT counts the total energy and mass transferred to each voxel in the reference phase and calculates the dose by dividing the energy by the mass; it is therefore based on fundamentally sound physics principles. In this study, we implemented the two algorithms and integrated them within the Eclipse treatment planning system. We then compared the clinical dosimetric difference between the two algorithms for ten lung cancer patients receiving stereotactic radiosurgery treatment, by accumulating the delivered dose to the end-of-exhale (EE) phase. Specifically, the respiratory period was divided into ten phases, and the dose to each phase was calculated, mapped to the EE phase, and then accumulated. The displacement vector field generated by Demons-based registration of the source and reference images was used to transfer the dose and energy. The DDM and EMT algorithms produced noticeably different cumulative doses in regions with sharp mass density variations and/or high dose gradients. For the planning target volume (PTV) and internal target volume (ITV) minimum dose, the difference was up to 11% and 4%, respectively. This suggests that DDM might not be adequate for obtaining an accurate dose distribution of the cumulative plan; instead, EMT should be considered.

  2. Direct dose mapping versus energy/mass transfer mapping for 4D dose accumulation: fundamental differences and dosimetric consequences

    NASA Astrophysics Data System (ADS)

    Li, Haisen S.; Zhong, Hualiang; Kim, Jinkoo; Glide-Hurst, Carri; Gulam, Misbah; Nurushev, Teamour S.; Chetty, Indrin J.

    2014-01-01

    The direct dose mapping (DDM) and energy/mass transfer (EMT) mapping are two essential algorithms for accumulating the dose from different anatomic phases to the reference phase when there is organ motion or tumor/tissue deformation during the delivery of radiation therapy. DDM is based on interpolation of the dose values from one dose grid to another and thus lacks rigor in defining the dose when multiple dose values are mapped to one dose voxel in the reference phase due to tissue/tumor deformation. On the other hand, EMT counts the total energy and mass transferred to each voxel in the reference phase and calculates the dose by dividing the energy by the mass; it is therefore based on fundamentally sound physics principles. In this study, we implemented the two algorithms and integrated them within the Eclipse treatment planning system. We then compared the clinical dosimetric difference between the two algorithms for ten lung cancer patients receiving stereotactic radiosurgery treatment, by accumulating the delivered dose to the end-of-exhale (EE) phase. Specifically, the respiratory period was divided into ten phases, and the dose to each phase was calculated, mapped to the EE phase, and then accumulated. The displacement vector field generated by Demons-based registration of the source and reference images was used to transfer the dose and energy. The DDM and EMT algorithms produced noticeably different cumulative doses in regions with sharp mass density variations and/or high dose gradients. For the planning target volume (PTV) and internal target volume (ITV) minimum dose, the difference was up to 11% and 4%, respectively. This suggests that DDM might not be adequate for obtaining an accurate dose distribution of the cumulative plan; instead, EMT should be considered.
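
    A minimal 1D sketch of the two mappings contrasted above (the voxel counts, masses, and deformation map are toy assumptions): DDM pulls one interpolated dose value per reference voxel and becomes ambiguous when the deformation maps several source voxels onto one reference voxel, whereas EMT pushes energy and mass and divides only at the end.

```python
import numpy as np

n = 10
dose_src = np.linspace(1.0, 2.0, n)   # source-phase dose (Gy)
mass_src = np.ones(n)                 # source voxel masses (arbitrary units)
# Toy deformation: source voxels 4 and 5 both land in reference voxel 4,
# and no source voxel lands in reference voxel 5.
dest = np.array([0, 1, 2, 3, 4, 4, 6, 7, 8, 9])

# DDM-like mapping: one dose value per reference voxel; a collision forces
# an arbitrary pick (here: the first hit), and voxel 5 is left undefined.
ddm = np.full(n, np.nan)
for r in range(n):
    hits = np.flatnonzero(dest == r)
    if hits.size:
        ddm[r] = dose_src[hits[0]]

# EMT: accumulate energy (dose * mass) and mass, then divide - unambiguous.
energy = np.zeros(n)
mass = np.zeros(n)
np.add.at(energy, dest, dose_src * mass_src)
np.add.at(mass, dest, mass_src)
emt = np.divide(energy, mass, out=np.full(n, np.nan), where=mass > 0)
print(ddm)
print(emt)
```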

  3. The accuracy of the out-of-field dose calculations using a model based algorithm in a commercial treatment planning system

    NASA Astrophysics Data System (ADS)

    Wang, Lilie; Ding, George X.

    2014-07-01

    The out-of-field dose can be clinically important as it relates to the dose to organs at risk, although the accuracy of its calculation in commercial radiotherapy treatment planning systems (TPSs) receives less attention. This study evaluates the uncertainties of the out-of-field dose calculated with a model-based dose calculation algorithm, the anisotropic analytical algorithm (AAA), implemented in a commercial radiotherapy TPS, Varian Eclipse V10, by using Monte Carlo (MC) simulations in which the entire accelerator head, including the multi-leaf collimators, is modeled. The MC-calculated out-of-field doses were validated by experimental measurements. The dose calculations were performed in a water phantom as well as in CT-based patient geometries, and both static and highly modulated intensity-modulated radiation therapy (IMRT) fields were evaluated. We compared the calculated out-of-field doses, defined as doses lower than 5% of the prescription dose, in four H&N cancer patients and two lung cancer patients treated with volumetric modulated arc therapy (VMAT) and IMRT techniques. The results show that the discrepancy between the out-of-field dose profiles calculated with AAA and MC depends on the depth and is generally less than 1% for the water phantom comparisons and for the CT-based patient dose calculations for static fields and IMRT. For the VMAT plans, the difference between AAA and MC is <0.5%. The clinical impact of the error on the calculated organ doses was analyzed by using dose-volume histograms. Although the AAA algorithm significantly underestimated the out-of-field doses, the clinical impact on the calculated organ doses in out-of-field regions may not be significant in practice due to the very low out-of-field doses relative to the target dose.

  4. A MAP blind image deconvolution algorithm with bandwidth over-constrained

    NASA Astrophysics Data System (ADS)

    Ren, Zhilei; Liu, Jin; Liang, Yonghui; He, Yulong

    2018-03-01

    We demonstrate a maximum a posteriori (MAP) blind image deconvolution algorithm with a bandwidth over-constraint and total variation (TV) regularization to recover a clear image from AO-corrected images. The point spread functions (PSFs) are estimated with their bandwidth constrained to less than the cutoff frequency of the optical system. Our algorithm performs well in avoiding noise magnification. The performance is demonstrated on simulated data.
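
    A minimal 1D sketch of this style of MAP estimation, assuming a circular-convolution image model and illustrative step sizes and regularization weight: the image and PSF are updated alternately by gradient steps, with a TV prior on the image and the PSF re-projected below an assumed cutoff frequency after every update.

```python
import numpy as np

def tv_grad(x, eps=1e-6):
    """Gradient of the smoothed total variation sum_k sqrt((x[k+1]-x[k])^2 + eps)."""
    d = np.diff(x)
    w = d / np.sqrt(d * d + eps)
    g = np.zeros_like(x)
    g[:-1] -= w
    g[1:] += w
    return g

def blind_deconv_map(y, cutoff, lam=1e-2, step=1e-2, iters=500):
    """Alternating MAP sketch: minimize 0.5*||h*x - y||^2 + lam*TV(x) over
    image x and PSF h (circular convolution), projecting h onto frequencies
    below 'cutoff' (the assumed optical bandwidth) after every update."""
    n = y.size
    conv = lambda a, b: np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n)
    corr = lambda a, b: np.fft.irfft(np.fft.rfft(a) * np.conj(np.fft.rfft(b)), n)
    x, h = y.copy(), np.full(n, 1.0 / n)          # flat initial PSF
    for _ in range(iters):
        r = conv(h, x) - y                        # data residual
        x = np.clip(x - step * (corr(r, h) + lam * tv_grad(x)), 0.0, None)
        h -= step * corr(r, x)
        H = np.fft.rfft(np.clip(h, 0.0, None))
        H[cutoff:] = 0.0                          # bandwidth over-constraint
        h = np.clip(np.fft.irfft(H, n), 0.0, None)
        h /= max(h.sum(), 1e-12)                  # keep the PSF normalized
    return x, h
```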

  5. Global, Multi-Objective Trajectory Optimization With Parametric Spreading

    NASA Technical Reports Server (NTRS)

    Vavrina, Matthew A.; Englander, Jacob A.; Phillips, Sean M.; Hughes, Kyle M.

    2017-01-01

    Mission design problems are often characterized by multiple, competing trajectory optimization objectives. Recent multi-objective trajectory optimization formulations enable generation of globally-optimal, Pareto solutions via a multi-objective genetic algorithm. A byproduct of these formulations is that clustering in design space can occur in evolving the population towards the Pareto front. This clustering can be a drawback, however, if parametric evaluations of design variables are desired. This effort addresses clustering by incorporating operators that encourage a uniform spread over specified design variables while maintaining Pareto front representation. The algorithm is demonstrated on a Neptune orbiter mission, and enhanced multidimensional visualization strategies are presented.

  6. SU-E-T-339: Dosimetric Verification of Acuros XB Dose Calculation Algorithm On An Air Cavity for 6-MV Flattening Filter-Free Beam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kang, S; Suh, T; Chung, J

    Purpose: This study verified the accuracy of the Acuros XB (AXB) dose calculation algorithm in an air cavity for single radiation fields using a 6-MV flattening filter-free (FFF) beam. Methods: A rectangular slab phantom containing an air cavity was made for this study. The CT images of the phantom for dose calculation were scanned with and without film at the measurement depths (4.5, 5.5, 6.5 and 7.5 cm). The central-axis doses (CADs) and the off-axis doses (OADs) were measured by film and calculated with the Analytical Anisotropic Algorithm (AAA) and AXB for field sizes ranging from 2 × 2 to 5 × 5 cm² of the 6-MV FFF beam. Calculations with each algorithm were labeled AXB-w and AAA-w when the film was included in the phantom, and AXB-w/o and AAA-w/o when it was not. The calculated OADs for both algorithms were compared with the measured OADs, and the differences were quantified using the root mean square error (RMSE) and gamma evaluation. Results: The percentage differences (%Diffs) between the measured and calculated CADs showed the best agreement for AXB-w. For both algorithms, the %Diffs were smaller when the film was included in the calculation than when it was not. The %Diffs for both algorithms decreased with increasing field size and increased with depth. The RMSEs of the CADs for AXB-w were within 10.32% for both the inner profile and the penumbra, while the corresponding values for AAA-w reached 96.50%. Conclusion: This study demonstrated that dose calculations with AXB within an air cavity agree more closely with the measured dose than those with AAA. Furthermore, we found that AXB-w was superior to AXB-w/o in this region when compared against the measurements.

  7. SU-F-I-09: Improvement of Image Registration Using Total-Variation Based Noise Reduction Algorithms for Low-Dose CBCT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mukherjee, S; Farr, J; Merchant, T

    Purpose: To study the effect of total-variation based noise reduction algorithms on the image registration of low-dose CBCT for patient positioning in radiation therapy. Methods: In low-dose CBCT, the reconstructed image is degraded by excessive quantum noise. In this study, we developed a total-variation based noise reduction algorithm and studied its effect on noise reduction and image registration accuracy. To quantify the noise reduction, we calculated the peak signal-to-noise ratio (PSNR). To quantify the improvement of image registration, we performed image registration between volumetric CT and MV-CBCT images of different head-and-neck patients and calculated the mutual information (MI) and Pearson correlation coefficient (PCC) as similarity metrics. The PSNR, MI, and PCC were calculated for both the noisy and noise-reduced CBCT images. Results: The algorithms were shown to be effective in reducing the noise level and improving the MI and PCC for the low-dose CBCT images tested. Across the head-and-neck patients, a maximum improvement in PSNR of 10 dB with respect to the noisy image was calculated. The improvements in MI and PCC were 9% and 2%, respectively. Conclusion: A total-variation based noise reduction algorithm was studied to improve the image registration between CT and low-dose CBCT. The algorithm showed promising results in reducing the noise in low-dose CBCT images and improving the similarity metrics in terms of MI and PCC.

  8. Warfarin Pharmacogenomics in Diverse Populations.

    PubMed

    Kaye, Justin B; Schultz, Lauren E; Steiner, Heidi E; Kittles, Rick A; Cavallari, Larisa H; Karnes, Jason H

    2017-09-01

    Genotype-guided warfarin dosing algorithms are a rational approach to optimize warfarin dosing and potentially reduce adverse drug events. Diverse populations, such as African Americans and Latinos, have greater variability in warfarin dose requirements and are at greater risk for experiencing warfarin-related adverse events compared with individuals of European ancestry. Although these data suggest that patients of diverse populations may benefit from improved warfarin dose estimation, the vast majority of literature on genotype-guided warfarin dosing, including data from prospective randomized trials, is in populations of European ancestry. Despite differing frequencies of variants by race/ethnicity, most evidence in diverse populations evaluates variants that are most common in populations of European ancestry. Algorithms that do not include variants important across race/ethnic groups are unlikely to benefit diverse populations. In some race/ethnic groups, development of race-specific or admixture-based algorithms may facilitate improvements in genotype-guided warfarin dosing beyond those seen in individuals of European ancestry. These observations should be considered in the interpretation of literature evaluating the clinical utility of genotype-guided warfarin dosing. Careful consideration of race/ethnicity and additional evidence focused on improving warfarin dosing algorithms across race/ethnic groups will be necessary for successful clinical implementation of warfarin pharmacogenomics. The evidence for warfarin pharmacogenomics has broad significance for pharmacogenomic testing, emphasizing the consideration of race/ethnicity in discovery of gene-drug pairs and development of clinical recommendations for pharmacogenetic testing. © 2017 Pharmacotherapy Publications, Inc.
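
    Genotype-guided algorithms of the kind discussed above are typically linear models on the square root of the weekly dose with genotype adjustment terms. The sketch below shows only that common structure; every coefficient and adjustment value is hypothetical, not a validated clinical algorithm.

```python
# Common structure of genotype-guided dosing models: a linear predictor of
# sqrt(weekly dose) with genotype adjustments. ALL numbers are hypothetical.
CYP2C9_ADJ = {"*1/*1": 0.0, "*1/*2": -0.5, "*1/*3": -1.0,
              "*2/*2": -1.2, "*2/*3": -1.6, "*3/*3": -2.5}
VKORC1_ADJ = {"GG": 0.0, "AG": -0.8, "AA": -1.6}

def weekly_dose_mg(age_yr, height_cm, weight_kg, cyp2c9, vkorc1):
    """Hypothetical illustration only - not a validated dosing algorithm."""
    s = (5.6 - 0.25 * (age_yr // 10) + 0.01 * height_cm
         + 0.007 * weight_kg + CYP2C9_ADJ[cyp2c9] + VKORC1_ADJ[vkorc1])
    return max(s, 0.0) ** 2

print(f"{weekly_dose_mg(65, 170, 80, '*1/*3', 'AG'):.1f} mg/week")
```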

  9. Influence of radiation dose and iterative reconstruction algorithms for measurement accuracy and reproducibility of pulmonary nodule volumetry: A phantom study.

    PubMed

    Kim, Hyungjin; Park, Chang Min; Song, Yong Sub; Lee, Sang Min; Goo, Jin Mo

    2014-05-01

    To evaluate the influence of radiation dose settings and reconstruction algorithms on the measurement accuracy and reproducibility of semi-automated pulmonary nodule volumetry. CT scans were performed on a chest phantom containing various nodules (10 and 12 mm; +100, -630 and -800 HU) at 120 kVp with tube current-time settings of 10, 20, 50, and 100 mAs. Each CT was reconstructed using filtered back projection (FBP), iDose(4), and iterative model reconstruction (IMR). Semi-automated volumetry was performed by two radiologists using commercial volumetry software for the nodules in each CT dataset. The noise, contrast-to-noise ratio, and signal-to-noise ratio of the CT images were also obtained. The absolute percentage measurement errors and differences were then calculated for volume and mass. The influence of radiation dose and reconstruction algorithm on measurement accuracy, reproducibility, and objective image quality metrics was analyzed using generalized estimating equations. Measurement accuracy and reproducibility of nodule volume and mass were not significantly associated with CT radiation dose settings or reconstruction algorithms (p>0.05). Objective image quality metrics were superior for IMR compared with FBP or iDose(4) at all radiation dose settings (p<0.05). Semi-automated nodule volumetry can be applied to low- or ultralow-dose chest CT with a novel iterative reconstruction algorithm without losing measurement accuracy and reproducibility. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  10. Pediatric chest HRCT using the iDose4 Hybrid Iterative Reconstruction Algorithm: Which iDose level to choose?

    NASA Astrophysics Data System (ADS)

    Smarda, M.; Alexopoulou, E.; Mazioti, A.; Kordolaimi, S.; Ploussi, A.; Priftis, K.; Efstathopoulos, E.

    2015-09-01

    The purpose of the study was to determine the appropriate iterative reconstruction (IR) algorithm level, combining image quality and diagnostic confidence, for pediatric patients undergoing high-resolution computed tomography (HRCT). During the last 2 years, a total of 20 children up to 10 years old with a clinical presentation of chronic bronchitis underwent HRCT in our department's 64-detector-row CT scanner using the iDose IR algorithm, with similar image settings (80 kVp, 40-50 mAs). CT images were reconstructed with all iDose levels (levels 1 to 7) as well as with the filtered back projection (FBP) algorithm. Subjective image quality was evaluated by 2 experienced radiologists in terms of image noise, sharpness, contrast, and diagnostic acceptability using a 5-point scale (1 = excellent image, 5 = non-acceptable image). The presence of artifacts was also noted. All mean scores from both radiologists corresponded to satisfactory image quality (score ≤3), even with the FBP algorithm. Almost excellent (score <2) overall image quality was achieved with iDose levels 5 to 7, but oversmoothing artifacts appearing with iDose levels 6 and 7 affected the diagnostic confidence. In conclusion, the use of iDose level 5 enables almost excellent image quality without considerable artifacts affecting the diagnosis. Further evaluation is needed in order to draw more precise conclusions.

  11. Analytical probabilistic modeling of RBE-weighted dose for ion therapy.

    PubMed

    Wieser, H P; Hennig, P; Wahl, N; Bangert, M

    2017-11-10

    Particle therapy is especially prone to uncertainties. This issue is usually addressed with uncertainty quantification and minimization techniques based on scenario sampling. For proton therapy, however, it was recently shown that it is also possible to use closed-form computations based on analytical probabilistic modeling (APM) for this purpose. APM yields unique features compared to sampling-based approaches, motivating further research in this context. This paper demonstrates the application of APM for intensity-modulated carbon ion therapy to quantify the influence of setup and range uncertainties on the RBE-weighted dose. In particular, we derive analytical forms for the nonlinear computations of the expectation value and variance of the RBE-weighted dose by propagating linearly correlated Gaussian input uncertainties through a pencil beam dose calculation algorithm. Both exact and approximation formulas are presented for the expectation value and variance of the RBE-weighted dose and are subsequently studied in-depth for a one-dimensional carbon ion spread-out Bragg peak. With V and B being the number of voxels and pencil beams, respectively, the proposed approximations induce only a marginal loss of accuracy while lowering the computational complexity from order O(V × B^2) to O(V × B) for the expectation value and from O(V × B^4) to O(V × B^2) for the variance of the RBE-weighted dose. Moreover, we evaluated the approximated calculation of the expectation value and standard deviation of the RBE-weighted dose in combination with a probabilistic effect-based optimization on three patient cases considering carbon ions as radiation modality against sampled references. The resulting global γ-pass rates (2 mm, 2%) are >99.15% for the expectation value and >94.95% for the standard deviation of the RBE-weighted dose, respectively. We applied the derived analytical model to carbon ion treatment planning, although the concept is in general applicable to other ion species considering a variable RBE.

  12. Analytical probabilistic modeling of RBE-weighted dose for ion therapy

    NASA Astrophysics Data System (ADS)

    Wieser, H. P.; Hennig, P.; Wahl, N.; Bangert, M.

    2017-12-01

    Particle therapy is especially prone to uncertainties. This issue is usually addressed with uncertainty quantification and minimization techniques based on scenario sampling. For proton therapy, however, it was recently shown that it is also possible to use closed-form computations based on analytical probabilistic modeling (APM) for this purpose. APM yields unique features compared to sampling-based approaches, motivating further research in this context. This paper demonstrates the application of APM for intensity-modulated carbon ion therapy to quantify the influence of setup and range uncertainties on the RBE-weighted dose. In particular, we derive analytical forms for the nonlinear computations of the expectation value and variance of the RBE-weighted dose by propagating linearly correlated Gaussian input uncertainties through a pencil beam dose calculation algorithm. Both exact and approximation formulas are presented for the expectation value and variance of the RBE-weighted dose and are subsequently studied in-depth for a one-dimensional carbon ion spread-out Bragg peak. With V and B being the number of voxels and pencil beams, respectively, the proposed approximations induce only a marginal loss of accuracy while lowering the computational complexity from order O(V × B^2) to O(V × B) for the expectation value and from O(V × B^4) to O(V × B^2) for the variance of the RBE-weighted dose. Moreover, we evaluated the approximated calculation of the expectation value and standard deviation of the RBE-weighted dose in combination with a probabilistic effect-based optimization on three patient cases considering carbon ions as radiation modality against sampled references. The resulting global γ-pass rates (2 mm, 2%) are > 99.15% for the expectation value and > 94.95% for the standard deviation of the RBE-weighted dose, respectively. We applied the derived analytical model to carbon ion treatment planning, although the concept is in general applicable to other ion species considering a variable RBE.
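
    The closed-form propagation idea is easiest to see on the linear part of the dose model. The sketch below propagates correlated Gaussian input uncertainty through a toy dose-influence matrix; the RBE-weighted dose in the paper is nonlinear and requires the Gaussian moment integrals derived there, which this sketch does not reproduce.

```python
import numpy as np

V, B = 5, 3                          # voxels, pencil beams
rng = np.random.default_rng(0)
T = rng.random((V, B))               # toy dose-influence matrix (d = T w)
mu = np.full(B, 1.0)                 # mean pencil-beam weights
L = rng.random((B, B)) * 0.05
Sigma = L @ L.T                      # correlated Gaussian input covariance

# Closed-form moments, no scenario sampling:
#   E[d] = T mu,   Cov[d] = T Sigma T^T
exp_dose = T @ mu
var_dose = np.diag(T @ Sigma @ T.T)
print(exp_dose)
print(var_dose)
```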

  13. Characterisation of mega-voltage electron pencil beam dose distributions: viability of a measurement-based approach.

    PubMed

    Barnes, M P; Ebert, M A

    2008-03-01

    The concept of electron pencil-beam dose distributions is central to the pencil-beam algorithms used in electron beam radiotherapy treatment planning. The Hogstrom algorithm, a common algorithm for electron treatment planning, models large electron field dose distributions by the superposition of a series of pencil-beam dose distributions. This means that the accurate characterisation of an electron pencil beam is essential for the accuracy of the dose algorithm. The aim of this study was to evaluate a measurement-based approach for obtaining electron pencil-beam dose distributions. The primary incentive for the study was the accurate calculation of dose distributions for narrow fields, as traditional electron algorithms are generally inaccurate for such geometries. Kodak X-Omat radiographic film was used in a solid water phantom to measure the dose distribution of circular 12 MeV beams from a Varian 21EX linear accelerator. Measurements were made for beams of diameter 1.5, 2, 4, 8, 16 and 32 mm. A blocked-field technique was used to subtract photon contamination in the beam. The "error function" derived from Fermi-Eyges multiple Coulomb scattering (MCS) theory for corresponding square fields was used to fit the resulting dose distributions so that extrapolation down to a pencil-beam distribution could be made. The Monte Carlo codes BEAM and EGSnrc were used to simulate the experimental arrangement. The 8 mm beam dose distribution was also measured with TLD-100 microcubes. Agreement between the film, TLD and Monte Carlo simulation results was found to be consistent with the spatial resolution used. The study has shown that it is possible to extrapolate narrow electron beam dose distributions down to a pencil-beam dose distribution using the error function. However, due to experimental uncertainties and measurement difficulties, Monte Carlo is recommended as the method of choice for characterising electron pencil-beam dose distributions.
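
    The error-function fit described above follows from the Gaussian lateral spread of a pencil beam: a square field of half-width a built from Gaussian pencils has an edge profile given by a sum of error functions. Below is a minimal sketch of extracting the pencil-beam sigma from a synthetic edge profile; the field parameters and noise level are illustrative assumptions, not the study's film data.

```python
import numpy as np
from scipy.special import erf
from scipy.optimize import curve_fit

def square_field_profile(x, sigma, a, d0):
    """Lateral profile of a square field of half-width a built from
    Gaussian pencil beams of lateral spread sigma (Fermi-Eyges form)."""
    return d0 * 0.5 * (erf((a - x) / (np.sqrt(2.0) * sigma))
                       + erf((a + x) / (np.sqrt(2.0) * sigma)))

# Synthetic 'measured' 16 mm field profile with noise (stand-in for film data).
x = np.linspace(-30.0, 30.0, 121)             # mm
y = square_field_profile(x, sigma=3.2, a=8.0, d0=1.0)
y += np.random.default_rng(1).normal(0.0, 0.01, x.size)

popt, _ = curve_fit(square_field_profile, x, y, p0=(2.0, 8.0, 1.0))
print(f"fitted pencil-beam sigma: {popt[0]:.2f} mm")
# The extrapolated pencil-beam kernel is then the Gaussian exp(-x^2 / (2 sigma^2)).
```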

  14. Locating the source of diffusion in complex networks by time-reversal backward spreading.

    PubMed

    Shen, Zhesi; Cao, Shinan; Wang, Wen-Xu; Di, Zengru; Stanley, H Eugene

    2016-03-01

    Locating the source that triggers a dynamical process is a fundamental but challenging problem in complex networks, ranging from epidemic spreading in society and on the Internet to cancer metastasis in the human body. An accurate localization of the source is inherently limited by our ability to simultaneously access the information of all nodes in a large-scale complex network. This thus raises two critical questions: how do we locate the source from incomplete information and can we achieve full localization of sources at any possible location from a given set of observable nodes. Here we develop a time-reversal backward spreading algorithm to locate the source of a diffusion-like process efficiently and propose a general locatability condition. We test the algorithm by employing epidemic spreading and consensus dynamics as typical dynamical processes and apply it to the H1N1 pandemic in China. We find that the sources can be precisely located in arbitrary networks insofar as the locatability condition is assured. Our tools greatly improve our ability to locate the source of diffusion in complex networks based on limited accessibility of nodal information. Moreover, they have implications for controlling a variety of dynamical processes taking place on complex networks, such as inhibiting epidemics, slowing the spread of rumors, pollution control, and environmental protection.

  15. Locating the source of diffusion in complex networks by time-reversal backward spreading

    NASA Astrophysics Data System (ADS)

    Shen, Zhesi; Cao, Shinan; Wang, Wen-Xu; Di, Zengru; Stanley, H. Eugene

    2016-03-01

    Locating the source that triggers a dynamical process is a fundamental but challenging problem in complex networks, ranging from epidemic spreading in society and on the Internet to cancer metastasis in the human body. An accurate localization of the source is inherently limited by our ability to simultaneously access the information of all nodes in a large-scale complex network. This thus raises two critical questions: how do we locate the source from incomplete information and can we achieve full localization of sources at any possible location from a given set of observable nodes. Here we develop a time-reversal backward spreading algorithm to locate the source of a diffusion-like process efficiently and propose a general locatability condition. We test the algorithm by employing epidemic spreading and consensus dynamics as typical dynamical processes and apply it to the H1N1 pandemic in China. We find that the sources can be precisely located in arbitrary networks insofar as the locatability condition is assured. Our tools greatly improve our ability to locate the source of diffusion in complex networks based on limited accessibility of nodal information. Moreover, they have implications for controlling a variety of dynamical processes taking place on complex networks, such as inhibiting epidemics, slowing the spread of rumors, pollution control, and environmental protection.
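
    A minimal sketch of the time-reversal idea on an unweighted graph (the network, observer set, and noise-free arrival times are illustrative assumptions; networkx is assumed available): subtracting the modeled propagation delay from each observer's arrival time should collapse all reversed times to the common start time only at the true source, so the candidate minimizing their variance is selected.

```python
import networkx as nx
import numpy as np

def locate_source(G, obs_times, speed=1.0):
    """Time-reversal backward spreading sketch: reverse each observed
    arrival time by the modeled delay from a candidate source; the true
    source makes the reversed times collapse to one origin time, so pick
    the candidate with the smallest variance of reversed times."""
    best, best_var = None, np.inf
    for s in G.nodes:
        dist = nx.single_source_shortest_path_length(G, s)
        rev = [t - dist[o] / speed for o, t in obs_times.items()]
        v = float(np.var(rev))
        if v < best_var:
            best, best_var = s, v
    return best

G = nx.barabasi_albert_graph(200, 3, seed=7)
true_source = 42
d = nx.single_source_shortest_path_length(G, true_source)
observers = list(G.nodes)[:20]                    # partial observability
obs_times = {o: float(d[o]) for o in observers}   # noise-free arrival times
# Typically recovers node 42 when the observer set satisfies the
# locatability condition discussed in the abstract.
print(locate_source(G, obs_times))
```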

  16. Traffic-driven epidemic spreading on scale-free networks with tunable degree distribution

    NASA Astrophysics Data System (ADS)

    Yang, Han-Xin; Wang, Bing-Hong

    2016-04-01

    We study traffic-driven epidemic spreading on scale-free networks with a tunable degree distribution. The heterogeneity of the networks is controlled by the exponent γ of the power-law degree distribution. It is found that the epidemic threshold is minimized at about γ=2.2. Moreover, we find that nodes with larger algorithmic betweenness are more likely to be infected. We expect our work to provide new insights into the effect of network structure on traffic-driven epidemic spreading.
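
    A sketch of the setup under stated assumptions: a configuration-model network with a tunable power-law exponent, plus mean-field threshold proxies. For classical SIS spreading the threshold scales as <k>/<k^2>; the assumption made here, following traffic-driven spreading models, is that the analogous ratio over the betweenness is the relevant quantity when infection is carried by routed traffic.

```python
import networkx as nx
import numpy as np

def power_law_network(n, gamma, k_min=2, seed=0):
    """Configuration-model graph with degree distribution P(k) ~ k^-gamma."""
    rng = np.random.default_rng(seed)
    ks = np.arange(k_min, int(np.sqrt(n)) + 1)   # structural cutoff
    p = ks.astype(float) ** -gamma
    p /= p.sum()
    seq = rng.choice(ks, size=n, p=p)
    if seq.sum() % 2:
        seq[0] += 1                              # degree sum must be even
    G = nx.Graph(nx.configuration_model(seq.tolist(), seed=seed))
    G.remove_edges_from(nx.selfloop_edges(G))    # keep a simple graph
    return G

G = power_law_network(1000, gamma=2.2)
k = np.array([d for _, d in G.degree()], dtype=float)
b = np.array(list(nx.betweenness_centrality(G).values())) + 1e-12
print("degree-based SIS proxy  <k>/<k^2>:", k.mean() / (k**2).mean())
print("betweenness-based proxy <b>/<b^2>:", b.mean() / (b**2).mean())
```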

  17. Dosimetric Evaluation of Metal Artefact Reduction using Metal Artefact Reduction (MAR) Algorithm and Dual-energy Computed Tomography (CT) Method

    NASA Astrophysics Data System (ADS)

    Laguda, Edcer Jerecho

    Purpose: Computed Tomography (CT) is one of the standard diagnostic imaging modalities for the evaluation of a patient's medical condition. In comparison to other imaging modalities such as Magnetic Resonance Imaging (MRI), CT is a fast-acquisition imaging device with higher spatial resolution and a higher contrast-to-noise ratio (CNR) for bony structures. CT images are presented on a gray scale of intensity values in Hounsfield units (HU). Higher HU values represent higher density. High-density materials, such as metal, tend to erroneously increase the HU values around them due to reconstruction software limitations. This problem of increased HU values due to the presence of metal is referred to as metal artefacts. Hip prostheses, dental fillings, aneurysm clips, and spinal clips are a few examples of metal objects that are of clinical relevance. These implants create artefacts such as beam hardening and photon starvation that distort CT images and degrade image quality. This is of great significance because the distortions may cause improper evaluation of images and inaccurate dose calculation in the treatment planning system. Different algorithms are being developed to reduce these artefacts and improve image quality for both diagnostic and therapeutic purposes. However, very limited information is available about the effect of artefact correction on dose calculation accuracy. This research study evaluates the dosimetric effect of metal artefact reduction algorithms on CT images with severe artefacts. The study uses the Gemstone Spectral Imaging (GSI)-based MAR algorithm, the projection-based Metal Artefact Reduction (MAR) algorithm, and the Dual-Energy method. Materials and Methods: The Gemstone Spectral Imaging (GSI)-based and SMART Metal Artefact Reduction (MAR) algorithms are metal artefact reduction protocols embedded in two different CT scanner models by General Electric (GE), and the Dual-Energy Imaging Method was developed at Duke University. All three approaches were applied in this research for dosimetric evaluation on CT images with severe metal artefacts. The first part of the research used a water phantom with four iodine syringes. Two sets of plans, multi-arc and single-arc, using the Volumetric Modulated Arc Therapy (VMAT) technique were designed to avoid or minimize influences from high-density objects. The second part of the research used the projection-based MAR algorithm and the Dual-Energy method. Calculated doses (mean, minimum, and maximum) to the planning treatment volume (PTV) were compared and the homogeneity index (HI) was calculated. Results: (1) Without the GSI-based MAR application, a percent error between the mean dose and the absolute dose ranging from 3.4-5.7% per fraction was observed. In contrast, the error decreased to a range of 0.09-2.3% per fraction with the GSI-based MAR algorithm. There was a percent difference ranging from 1.7-4.2% per fraction between plans with and without the GSI-based MAR algorithm. (2) A range of 0.1-3.2% difference was observed for the maximum dose values, 1.5-10.4% for the minimum dose difference, and 1.4-1.7% difference in the mean doses. Homogeneity indices (HI) ranging from 0.065-0.068 for the dual-energy method and 0.063-0.141 with the projection-based MAR algorithm were also calculated. Conclusion: (1) The percent error without the GSI-based MAR algorithm may deviate by as much as 5.7%. Such an error undermines the goal of radiation therapy to provide precise treatment.
Thus, the GSI-based MAR algorithm is desirable due to its better dose calculation accuracy. (2) Based on direct numerical observation, there was no apparent deviation between the mean doses of the different techniques, but deviations were evident in the maximum and minimum doses. The HI for the dual-energy method almost achieved the desired null value. In conclusion, the Dual-Energy method gave better dose calculation accuracy to the planning treatment volume (PTV) for images with metal artefacts than with or without the GE MAR algorithm.

  18. Biological dosimetry by the triage dicentric chromosome assay: potential implications for treatment of acute radiation syndrome in radiological mass casualties.

    PubMed

    Romm, Horst; Wilkins, Ruth C; Coleman, C Norman; Lillis-Hearne, Patricia K; Pellmar, Terry C; Livingston, Gordon K; Awa, Akio A; Jenkins, Mark S; Yoshida, Mitsuaki A; Oestreicher, Ursula; Prasanna, Pataje G S

    2011-03-01

    Biological dosimetry is an essential tool for estimating radiation dose. The dicentric chromosome assay (DCA) is currently the tool of choice. Because the assay is labor-intensive and time-consuming, strategies are needed to increase throughput for use in radiation mass casualty incidents. One such strategy is to truncate metaphase spread analysis for triage dose estimates by scoring 50 or fewer metaphases, compared to a routine analysis of 500 to 1000 metaphases, and to increase throughput by using a large group of scorers in a biodosimetry network. Previously, the National Institute of Allergy and Infectious Diseases (NIAID) and the Armed Forces Radiobiology Research Institute (AFRRI) sponsored a double-blinded interlaboratory comparison among five established international cytogenetic biodosimetry laboratories to determine the variability in calibration curves and in dose measurements in unknown, irradiated samples. In the present study, we further analyzed the published data from this previous study to investigate how the number of metaphase spreads influences dose prediction accuracy and how this information could be of value in the triage and management of people at risk for acute radiation syndrome (ARS). Although, as expected, accuracy decreased with lower numbers of metaphase spreads analyzed, the doses predicted by the laboratories were in good agreement and were judged to be adequate to guide the diagnosis and treatment of ARS. These results demonstrate that for rapid triage, a network of cytogenetic biodosimetry laboratories can accurately assess doses even with a lower number of scored metaphases.
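
    Dose estimation from dicentric yields conventionally inverts a laboratory-specific linear-quadratic calibration curve Y = c + alpha*D + beta*D^2. A minimal sketch with illustrative (not laboratory) coefficients:

```python
import numpy as np

def dose_from_dicentrics(n_dic, n_cells, c, alpha, beta):
    """Invert the linear-quadratic calibration Y = c + alpha*D + beta*D^2,
    where Y is the dicentric yield per cell, for the absorbed dose D (Gy)."""
    y = n_dic / n_cells
    roots = np.roots([beta, alpha, c - y])       # beta*D^2 + alpha*D + (c - y) = 0
    real = roots[np.isreal(roots)].real
    return float(max(real[real >= 0], default=np.nan))

# ILLUSTRATIVE coefficients only - each laboratory fits its own curve.
c, alpha, beta = 0.001, 0.02, 0.06
# Triage-style scoring: 50 metaphases with 6 dicentrics observed.
print(f"estimated dose: {dose_from_dicentrics(6, 50, c, alpha, beta):.2f} Gy")
```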

  19. Automated coronary artery calcification detection on low-dose chest CT images

    NASA Astrophysics Data System (ADS)

    Xie, Yiting; Cham, Matthew D.; Henschke, Claudia; Yankelevitz, David; Reeves, Anthony P.

    2014-03-01

    Coronary artery calcification (CAC) measurement from low-dose CT images can be used to assess the risk of coronary artery disease. A fully automatic algorithm to detect and measure CAC from low-dose non-contrast, non-ECG-gated chest CT scans is presented. Based on the automatically detected CAC, the Agatston score (AS), mass score, and volume score were computed. These were compared with scores obtained manually from standard-dose ECG-gated scans and low-dose un-gated scans of the same patient. The automatic algorithm segments the heart region based on other pre-segmented organs to provide a coronary region mask. Mitral valve and aortic valve calcification is identified and excluded. All remaining voxels greater than 180 HU within the mask region are considered CAC candidates. The heart segmentation algorithm was evaluated on 400 non-contrast cases with both low-dose and regular-dose CT scans. By visual inspection, 371 (92.8%) of the segmentations were acceptable. The automated CAC detection algorithm was evaluated on 41 low-dose non-contrast CT scans. Manual markings were performed on both the low-dose and standard-dose scans for these cases. Using linear regression, the correlation of the automatic AS with the standard-dose manual scores was 0.86; with the low-dose manual scores the correlation was 0.91. Standard risk categories were also computed. The automated risk category agreed with the manual markings of gated scans in 24 cases, while 15 cases were one category off. For the low-dose scans, the automatic method agreed in 33 cases, while 7 cases were one category off.
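
    A per-slice sketch of Agatston-style scoring as conventionally defined (connected lesions above an HU threshold, weighted 1-4 by peak HU and multiplied by area). The threshold, minimum lesion area, and test data are illustrative; note the study above used 180 HU rather than the classical 130 HU for its un-gated low-dose scans.

```python
import numpy as np
from scipy import ndimage

def agatston_slice_score(hu, pixel_area_mm2, threshold=130, min_area_mm2=1.0):
    """Per-slice Agatston-style score: connected lesions above 'threshold'
    are weighted 1-4 by peak HU (130-199:1, 200-299:2, 300-399:3, >=400:4)
    and the weight is multiplied by the lesion area in mm^2."""
    labels, n = ndimage.label(hu >= threshold)
    score = 0.0
    for i in range(1, n + 1):
        region = labels == i
        area = region.sum() * pixel_area_mm2
        if area < min_area_mm2:
            continue                        # skip single-pixel noise
        w = 1 + min(int(hu[region].max() // 100) - 1, 3)
        score += w * area
    return score

demo = np.zeros((64, 64))
demo[30:33, 30:33] = 320                    # one 3x3 lesion peaking at 320 HU
print(agatston_slice_score(demo, pixel_area_mm2=0.25))   # 3 * 2.25 = 6.75
```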

  20. Migraine prophylaxis, ischemic depolarizations and stroke outcomes in mice

    PubMed Central

    Eikermann-Haerter, Katharina; Lee, Jeong Hyun; Yalcin, Nilufer; Yu, Esther Sori; Daneshmand, Ali; Wei, Ying; Zheng, Yi; Can, Anil; Sengul, Buse; Ferrari, Michel D.; van den Maagdenberg, Arn M. J. M.; Ayata, Cenk

    2014-01-01

    Background and Purpose Migraine with aura is an established stroke risk factor, and excitatory mechanisms such as spreading depression are implicated in the pathogenesis of both migraine and stroke. Spontaneous spreading depression waves originate within the peri-infarct tissue and exacerbate the metabolic mismatch during focal cerebral ischemia. Genetically enhanced spreading depression susceptibility facilitates anoxic depolarizations and peri-infarct spreading depressions and accelerates infarct growth, suggesting that susceptibility to spreading depression is a critical determinant of vulnerability to ischemic injury. Because chronic treatment with migraine prophylactic drugs suppresses spreading depression susceptibility, we tested whether migraine prophylaxis can also suppress ischemic depolarizations and improve stroke outcome. Methods We measured the cortical susceptibility to spreading depression and ischemic depolarizations, and determined tissue and neurological outcome after middle cerebral artery occlusion in wild type and familial hemiplegic migraine type 1 knock-in mice treated with vehicle, topiramate or lamotrigine daily for 7 weeks or as a single dose shortly before testing. Results Chronic treatment with topiramate or lamotrigine reduces the susceptibility to KCl- or electrical stimulation-induced spreading depressions as well as ischemic depolarizations in both wild-type and familial hemiplegic migraine type 1 mutant mice. Consequently, both tissue and neurological outcomes are improved. Notably, treatment with a single dose of either drug is ineffective. Conclusions These data underscore the importance of hyperexcitability as a mechanism for increased stroke risk in migraineurs, and suggest that migraine prophylaxis may not only prevent migraine attacks but also protect migraineurs against ischemic injury. PMID:25424478

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pokhrel, D; Sood, S; Badkul, R

    Purpose: To compare dose distributions calculated using the PB-hete and XVMC algorithms for SRT treatments of cavernous sinus tumors. Methods: Using PB-hete SRT, five patients with cavernous sinus tumors received a prescription dose of 25 Gy in 5 fractions with planning target volume coverage PTV(V100%)=95%. The gross tumor volume (GTV) and organs at risk (OARs) were delineated on T1/T2 MRI-CT fused images. The PTV (range 2.1-84.3 cc, mean=21.7 cc) was generated using a 5 mm uniform margin around the GTV. PB-hete SRT plans included a combination of non-coplanar conformal arcs/static beams delivered by a Novalis-TX with HD-MLCs and a 6-MV SRS (1000 MU/min) beam. Plans were re-optimized using the XVMC algorithm with identical beam geometry and MLC positions. Plan-specific PTV(V99%), maximal, mean, and isocenter doses and total monitor units (MUs) were compared. Maximal doses to OARs such as the brainstem, optic pathway, spinal cord, and lenses, as well as the normal tissue volume receiving 12 Gy (V12), were compared between the two algorithms. All analyses were performed using two-tailed paired t-tests with an upper-bound p-value of <0.05. Results: With either algorithm, no dosimetrically significant differences in PTV coverage (PTV V99%, maximal, mean, and isocenter doses) or total number of MUs were observed (all p-values >0.05, mean ratios within 2%). However, maximal doses to the optic chiasm and nerves were significantly under-predicted by PB-hete (p=0.04). Maximal brainstem, spinal cord, and lens doses and V12 were all comparable between the two algorithms, with the exception of the patient with the largest PTV, who exhibited 11% higher V12 with XVMC. Conclusion: Unlike lung tumors, XVMC and PB-hete treatment plans provided similar PTV coverage for cavernous sinus tumors. The majority of OAR doses were comparable between the two algorithms, except for small structures such as the optic chiasm/nerves, which could potentially receive higher doses with the XVMC algorithm. Special attention may need to be paid on a case-by-case basis when planning sinus SRT, depending on tumor size and location relative to OARs, particularly the optic apparatus.

  2. Comparison of normal tissue dose calculation methods for epidemiological studies of radiotherapy patients.

    PubMed

    Mille, Matthew M; Jung, Jae Won; Lee, Choonik; Kuzmin, Gleb A; Lee, Choonsik

    2018-06-01

    Radiation dosimetry is an essential input for epidemiological studies of radiotherapy patients aimed at quantifying the dose-response relationship of late-term morbidity and mortality. Individualised organ dose must be estimated for all tissues of interest located in-field, near-field, or out-of-field. Whereas conventional measurement approaches are limited to points in water or anthropomorphic phantoms, computational approaches using patient images or human phantoms offer greater flexibility and can provide more detailed three-dimensional dose information. In the current study, we systematically compared four different dose calculation algorithms so that dosimetrists and epidemiologists can better understand the advantages and limitations of the various approaches at their disposal. The four dose calculation algorithms considered were as follows: the (1) Analytical Anisotropic Algorithm (AAA) and (2) Acuros XB algorithm (Acuros XB), as implemented in the Eclipse treatment planning system (TPS); (3) a Monte Carlo radiation transport code, EGSnrc; and (4) an accelerated Monte Carlo code, the x-ray Voxel Monte Carlo (XVMC). The four algorithms were compared in terms of their accuracy and appropriateness in the context of dose reconstruction for epidemiological investigations. Accuracy in peripheral dose was evaluated first by benchmarking the calculated dose profiles against measurements in a homogeneous water phantom. Additional simulations in a heterogeneous cylinder phantom evaluated the performance of the algorithms in the presence of tissue heterogeneity. In general, we found that the algorithms contained within the commercial TPS (AAA and Acuros XB) were fast and accurate in-field or near-field, but not acceptable out-of-field. Therefore, the TPS is best suited for epidemiological studies involving large cohorts and where the organs of interest are located in-field or partially in-field. The EGSnrc and XVMC codes showed excellent agreement with measurements both in-field and out-of-field. The EGSnrc code was the most accurate dosimetry approach, but was too slow to be used for large-scale epidemiological cohorts. The XVMC code showed similar accuracy to EGSnrc, but was significantly faster, and thus epidemiological applications seem feasible, especially when the organs of interest reside far away from the field edge.

  3. TU-AB-201-04: Optimizing the Number of Catheter Implants and Their Tracks for Prostate HDR Brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riofrio, D; Luan, S; Zhou, J

    Purpose: In prostate HDR brachytherapy, interstitial implants are placed manually on the fly. The aim of this research is to develop a computer algorithm to find optimal and reliable implant trajectories using a minimal number of implants. Methods: Our new algorithm uses these key ideas: (1) Positively charged static particles are uniformly placed on the surface of the prostate and critical structures such as the urethra, bladder, and rectum. (2) Positively charged kinetic particles are placed at a cross-section of the prostate with an initial velocity parallel to the principal implant direction. (3) The kinetic particles move through the prostate, interacting with each other and spreading out, while staying away from the prostate surface and critical structures. The initial velocity ensures that the trajectories observe the curvature constraints of typical implant procedures. (4) The final trajectories of the kinetic particles are smoothed using third-degree polynomial regression and become the implant trajectories. (5) The dwell times and final dose distribution are calculated using least-distance programming. A toy sketch of steps (1)-(4) is given after this entry. Results: (1) We experimented with previously treated cases. Our plan achieves all prescription goals while reducing the number of implants by 41%. Our plan also has a less uniform target dose, which implies that a higher dose is delivered to the prostate. (2) We expect future implant procedures to be performed under the guidance of such pre-calculated trajectories. To assess the applicability, we randomly perturbed the tracks to mimic manual implant errors. Our studies showed that the impact of these perturbations is negligible, as it is compensated by the least-distance programming. Conclusions: We developed a new inverse planning system for prostate HDR therapy that can find optimal implant trajectories while minimizing the number of implants. For future work, we plan to integrate our new inverse planning system with an existing needle tracking system.
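
    The toy sketch referenced in the Methods, under simplifying assumptions (2D geometry, a circular 'prostate' wall, and arbitrary force constants): kinetic particles launched along the implant direction repel each other and the fixed surface charges, and each raw track is then smoothed with a cubic polynomial.

```python
import numpy as np

t = np.linspace(0.0, 2.0 * np.pi, 40)
surface = np.column_stack([np.cos(t), np.sin(t)])   # static wall charges
pos = np.column_stack([np.linspace(-0.4, 0.4, 5), np.full(5, -0.9)])
vel = np.tile([0.0, 0.05], (5, 1))                  # initial velocity: implant direction
tracks = [pos.copy()]
for _ in range(40):
    f = np.zeros_like(pos)
    for q in (pos, surface):                        # particle-particle and wall repulsion
        d = pos[:, None, :] - q[None, :, :]
        r2 = (d ** 2).sum(-1) + 1e-3                # softened inverse-square force
        f += (d / r2[..., None] ** 1.5).sum(axis=1) * 1e-4
    vel += f
    pos = pos + vel
    tracks.append(pos.copy())
tracks = np.asarray(tracks)                          # (steps + 1, particles, 2)

# Step (4): smooth each coordinate of each track with a cubic polynomial.
s = np.arange(tracks.shape[0])
smoothed = np.stack(
    [np.stack([np.polyval(np.polyfit(s, tracks[:, i, c], 3), s)
               for c in range(2)], axis=1)
     for i in range(tracks.shape[1])], axis=1)       # same shape as tracks
```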

  4. Interpretation of gypsy moth frontal advance using meteorology in a conditional algorithm

    Treesearch

    K.L. Frank; P.C. Tobin; Jr. Thistle; Laurence S. Kalkstein

    2013-01-01

    The gypsy moth, Lymantria dispar, is a nonnative species that continues to invade areas in North America. It spreads generally through stratified dispersal where local growth and diffusive spread are coupled with long-distance jumps ahead of the leading edge. Long distance jumps due to anthropogenic movement of life stages is a well-documented...

  5. Deep Convolutional Framelet Denoising for Low-Dose CT via Wavelet Residual Network.

    PubMed

    Kang, Eunhee; Chang, Won; Yoo, Jaejun; Ye, Jong Chul

    2018-06-01

    Model-based iterative reconstruction algorithms for low-dose X-ray computed tomography (CT) are computationally expensive. To address this problem, we recently proposed a deep convolutional neural network (CNN) for low-dose X-ray CT and won second place in the 2016 AAPM Low-Dose CT Grand Challenge. However, some of the textures were not fully recovered. To address this problem, here we propose a novel framelet-based denoising algorithm using a wavelet residual network, which synergistically combines the expressive power of deep learning and the performance guarantee of framelet-based denoising algorithms. The new algorithms were inspired by the recent interpretation of a deep CNN as a cascaded convolution framelet signal representation. Extensive experimental results confirm that the proposed networks significantly improve performance and preserve the detailed texture of the original images.

  6. Improving Cancer Detection and Dose Efficiency in Dedicated Breast Cancer CT

    DTIC Science & Technology

    2010-02-01

    source trajectory and data truncation, which can however be solved with the back-projection filtration (BPF) algorithm [6,7]. I have used the BPF ...high to low radiation dose levels. I have investigated noise properties in images reconstructed by use of FDK and BPF algorithms at different noise...analytic algorithms such as the FDK and BPF algorithms are applied to sparse-view data, the reconstruction images will contain artifacts such as streak

  7. Automated algorithm for CBCT-based dose calculations of prostate radiotherapy with bilateral hip prostheses.

    PubMed

    Almatani, Turki; Hugtenburg, Richard P; Lewis, Ryan D; Barley, Susan E; Edwards, Mark A

    2016-10-01

    Cone beam CT (CBCT) images contain more scatter than a conventional CT image and therefore provide inaccurate Hounsfield units (HUs). Consequently, CBCT images cannot be used directly for radiotherapy dose calculation. The aim of this study is to enable dose calculations to be performed with CBCT images taken during radiotherapy and to evaluate the necessity of replanning. A patient with prostate cancer with bilateral metallic prosthetic hip replacements was imaged using both CT and CBCT. The multilevel threshold (MLT) algorithm was used to categorize pixel values in the CBCT images into segments of homogeneous HU. The variation in HU with position in the CBCT images was taken into consideration. This segmentation method relies on the operator dividing the CBCT data into a set of volumes where the variation in the relationship between pixel values and HUs is small. An automated MLT algorithm was developed to reduce the operator time associated with the process. An intensity-modulated radiation therapy plan was generated from CT images of the patient. The plan was then copied to the segmented CBCT (sCBCT) data sets with identical settings, and the doses were recalculated and compared. Gamma evaluation showed that the percentage of points in the rectum with γ < 1 (3%/3 mm) was 98.7% and 97.7% in the sCBCT using the MLT and the automated MLT algorithms, respectively. Compared with the planning CT (pCT) plan, the MLT algorithm showed a -0.46% dose difference with 8 h of operator time, while the automated MLT algorithm showed -1.3%; both are considered clinically acceptable when using the collapsed cone algorithm. The segmentation of CBCT images using the method in this study can be used for dose calculation. For a patient with prostate cancer with bilateral hip prostheses and the associated issues with CT imaging, the MLT algorithms achieved a dose calculation accuracy that is clinically acceptable. The automated MLT algorithm reduced the operator time associated with implementing the MLT algorithm while achieving clinically acceptable accuracy. This time saving makes the automated MLT algorithm superior and easier to implement in the clinical setting. The MLT algorithm has been extended to the complex example of a patient with bilateral hip prostheses, which, with the introduction of automation, is feasible for use in adaptive radiotherapy as an alternative to obtaining a new pCT and reoutlining the structures.
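
    A minimal sketch of the multilevel-threshold idea, assuming hypothetical thresholds and representative HU values (the published method also varies the mapping with position in the CBCT volume):

        import numpy as np

        def segment_cbct(pixels: np.ndarray,
                         thresholds=(-850.0, -200.0, 150.0, 1200.0),
                         segment_hu=(-1000.0, -700.0, 0.0, 700.0, 3000.0)) -> np.ndarray:
            """Map raw CBCT pixel values to a small set of homogeneous HU
            segments (air, lung, soft tissue, bone, metal in this toy example)."""
            bands = np.digitize(pixels, thresholds)   # band index per voxel
            return np.asarray(segment_hu)[bands]      # representative HU per band

        cbct = np.random.default_rng(1).normal(0.0, 400.0, size=(4, 64, 64))
        scbct = segment_cbct(cbct)                    # segmented CBCT for dose calc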

  8. Toward adaptive radiotherapy for head and neck patients: Uncertainties in dose warping due to the choice of deformable registration algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Veiga, Catarina, E-mail: catarina.veiga.11@ucl.ac.uk; Royle, Gary; Lourenço, Ana Mónica

    2015-02-15

    Purpose: The aims of this work were to evaluate the performance of several deformable image registration (DIR) algorithms implemented in our in-house software (NiftyReg) and the uncertainties inherent in using different algorithms for dose warping. Methods: The authors describe a DIR-based adaptive radiotherapy workflow using CT and cone-beam CT (CBCT) imaging. The transformations that mapped the anatomy between the two time points were obtained using four different DIR approaches available in NiftyReg. These included a standard unidirectional algorithm and more sophisticated bidirectional ones that encourage or ensure inverse consistency. The forward (CT-to-CBCT) deformation vector fields (DVFs) were used to propagate the CT Hounsfield units and structures to the daily geometry for “dose of the day” calculations, while the backward (CBCT-to-CT) DVFs were used to remap the dose of the day onto the planning CT (pCT). Data from five head and neck patients were used to evaluate the performance of each implementation based on geometrical matching, physical properties of the DVFs, and similarity between warped dose distributions. Geometrical matching was verified in terms of dice similarity coefficient (DSC), distance transform, false positives, and false negatives. The physical properties of the DVFs were assessed by calculating the harmonic energy, determinant of the Jacobian, and inverse consistency error of the transformations. Dose distributions were displayed on the pCT dose space and compared using dose difference (DD), distance to dose difference, and dose volume histograms. Results: All the DIR algorithms gave similar results in terms of geometrical matching, with an average DSC of 0.85 ± 0.08, but the underlying properties of the DVFs varied in terms of smoothness and inverse consistency. When comparing the doses warped by different algorithms, we found a root mean square DD of 1.9% ± 0.8% of the prescribed dose (pD) and that an average of 9% ± 4% of voxels within the treated volume failed a 2%pD DD-test (DD_2%-pp). Larger DD_2%-pp was found within the high dose gradient (21% ± 6%) and regions where the CBCT quality was poorer (28% ± 9%). The differences when estimating the mean and maximum dose delivered to organs-at-risk were up to 2.0%pD and 2.8%pD, respectively. Conclusions: The authors evaluated several DIR algorithms for CT-to-CBCT registrations. In spite of all methods resulting in comparable geometrical matching, the choice of DIR implementation leads to uncertainties in the warped dose, particularly in regions of high gradient and/or poor imaging quality.
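
    Dose warping with a backward DVF can be sketched with standard interpolation; a minimal example assuming a DVF given in voxel units on the dose grid (real workflows such as NiftyReg handle physical spacing and resampling):

        import numpy as np
        from scipy.ndimage import map_coordinates

        def warp_dose(dose: np.ndarray, dvf: np.ndarray) -> np.ndarray:
            """Pull-back warp: sample the dose at voxel + displacement."""
            grid = np.indices(dose.shape).astype(float)  # (3, z, y, x) identity grid
            coords = grid + dvf                          # dvf shape: (3, z, y, x)
            return map_coordinates(dose, coords, order=1, mode="nearest")

        dose = np.random.default_rng(2).random((16, 32, 32))
        dvf = np.zeros((3,) + dose.shape)
        dvf[0] += 1.5                                    # uniform 1.5-voxel shift in z
        warped = warp_dose(dose, dvf)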

  9. Quantitative Image Quality and Histogram-Based Evaluations of an Iterative Reconstruction Algorithm at Low-to-Ultralow Radiation Dose Levels: A Phantom Study in Chest CT

    PubMed Central

    Lee, Ki Baek

    2018-01-01

    Objective To describe the quantitative image quality and histogram-based evaluation of an iterative reconstruction (IR) algorithm in chest computed tomography (CT) scans at low-to-ultralow CT radiation dose levels. Materials and Methods In an adult anthropomorphic phantom, chest CT scans were performed with 128-section dual-source CT at 70, 80, 100, 120, and 140 kVp, and at the reference (3.4 mGy in volume CT Dose Index [CTDIvol]), 30%-, 60%-, and 90%-reduced radiation dose levels (2.4, 1.4, and 0.3 mGy). The CT images were reconstructed using filtered back projection (FBP) algorithms and the IR algorithm with strengths 1, 3, and 5. Image noise, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) were statistically compared between different dose levels, tube voltages, and reconstruction algorithms. Moreover, histograms of subtraction images before and after standardization in the x- and y-axes were visually compared. Results Compared with FBP images, IR images with strengths 1, 3, and 5 demonstrated image noise reduction up to 49.1%, SNR increase up to 100.7%, and CNR increase up to 67.3%. Noteworthy image quality degradations on IR images, including a 184.9% increase in image noise, a 63.0% decrease in SNR, and a 51.3% decrease in CNR, were shown between the 60%- and 90%-reduced levels of radiation dose (p < 0.0001). Subtraction histograms between FBP and IR images showed progressively increased dispersion with increased IR strength and increased dose reduction. After standardization, the histograms appeared deviated and ragged between FBP images and IR images with strength 3 or 5, but almost normally distributed between FBP images and IR images with strength 1. Conclusion The IR algorithm may be used to save radiation dose without substantial image quality degradation in chest CT scanning of the adult anthropomorphic phantom, down to approximately 1.4 mGy in CTDIvol (60% reduced dose). PMID:29354008
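
    The noise/SNR/CNR figures of merit used here are straightforward to compute; a minimal sketch assuming simple rectangular ROIs with hypothetical coordinates:

        import numpy as np

        def roi_stats(img: np.ndarray, sl: tuple):
            roi = img[sl]
            return roi.mean(), roi.std(ddof=1)

        def image_metrics(img: np.ndarray, tissue_roi, background_roi):
            mu_t, sigma_t = roi_stats(img, tissue_roi)
            mu_b, sigma_b = roi_stats(img, background_roi)
            noise = sigma_b                          # image noise: SD in a uniform region
            snr = mu_t / sigma_b                     # signal-to-noise ratio
            cnr = abs(mu_t - mu_b) / sigma_b         # contrast-to-noise ratio
            return noise, snr, cnr

        img = np.random.default_rng(3).normal(100.0, 10.0, size=(256, 256))
        img[100:140, 100:140] += 50.0                # synthetic "tissue" insert
        print(image_metrics(img, (slice(100, 140), slice(100, 140)),
                            (slice(10, 50), slice(10, 50))))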

  10. Evaluation of an analytic linear Boltzmann transport equation solver for high-density inhomogeneities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lloyd, S. A. M.; Ansbacher, W.; Department of Physics and Astronomy, University of Victoria, Victoria, British Columbia V8W 3P6

    2013-01-15

    Purpose: Acuros external beam (Acuros XB) is a novel dose calculation algorithm implemented through the ECLIPSE treatment planning system. The algorithm finds a deterministic solution to the linear Boltzmann transport equation, the same equation commonly solved stochastically by Monte Carlo methods. This work is an evaluation of Acuros XB, by comparison with Monte Carlo, for dose calculation applications involving high-density materials. Existing non-Monte Carlo clinical dose calculation algorithms, such as the analytic anisotropic algorithm (AAA), do not accurately model dose perturbations due to increased electron scatter within high-density volumes. Methods: Acuros XB, AAA, and EGSnrc-based Monte Carlo are used to calculate dose distributions from 18 MV and 6 MV photon beams delivered to a cubic water phantom containing a rectangular high-density (4.0–8.0 g/cm³) volume at its center. The algorithms are also used to recalculate a clinical prostate treatment plan involving a unilateral hip prosthesis, originally evaluated using AAA. These results are compared graphically and numerically using gamma-index analysis. Radiochromic film measurements are presented to augment Monte Carlo and Acuros XB dose perturbation data. Results: Using a 2% and 1 mm gamma-analysis, between 91.3% and 96.8% of Acuros XB dose voxels containing greater than 50% of the normalized dose were in agreement with Monte Carlo data for virtual phantoms involving 18 MV and 6 MV photons, stainless steel and titanium alloy implants, and on-axis and oblique field delivery. A similar gamma-analysis of AAA against Monte Carlo data showed between 80.8% and 87.3% agreement. Comparing Acuros XB and AAA evaluations of a clinical prostate patient plan involving a unilateral hip prosthesis, Acuros XB showed good overall agreement with Monte Carlo, while AAA underestimated dose on the upstream medial surface of the prosthesis due to electron scatter from the high-density material. Film measurements support the dose perturbations demonstrated by Monte Carlo and Acuros XB data. Conclusions: Acuros XB is shown to perform as well as Monte Carlo methods and better than existing clinical algorithms for dose calculations involving high-density volumes.
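
    The gamma-index analysis referenced here (2%/1 mm) can be sketched in brute force for a 1D profile; clinical tools operate on 2D/3D data with interpolation and dose thresholds:

        import numpy as np

        def gamma_index(dose_eval, dose_ref, x, dose_tol=0.02, dist_tol=1.0):
            """Return gamma for each reference point (Low et al. formulation),
            using global dose normalization and a shared spatial grid."""
            d_norm = dose_tol * dose_ref.max()             # global 2% criterion
            gammas = np.empty_like(dose_ref)
            for i, (xi, di) in enumerate(zip(x, dose_ref)):
                dd = (dose_eval - di) / d_norm             # dose-difference term
                dx = (x - xi) / dist_tol                   # distance-to-agreement term
                gammas[i] = np.sqrt(dx ** 2 + dd ** 2).min()
            return gammas

        x = np.linspace(0.0, 100.0, 201)                   # positions in mm
        ref = np.exp(-((x - 50.0) / 20.0) ** 2)            # toy reference profile
        ev = np.exp(-((x - 50.5) / 20.0) ** 2)             # slightly shifted profile
        pass_rate = (gamma_index(ev, ref, x) < 1.0).mean()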

  11. The effect of ketamine on optical and electrical characteristics of spreading depolarizations in gyrencephalic swine cortex.

    PubMed

    Sánchez-Porras, Renán; Santos, Edgar; Schöll, Michael; Stock, Christian; Zheng, Zelong; Schiebel, Patrick; Orakcioglu, Berk; Unterberg, Andreas W; Sakowitz, Oliver W

    2014-09-01

    Spreading depolarization (SD) is a wave of mass neuronal and glial depolarization that propagates across the cerebral cortex and has been implicated in the pathophysiology of brain injury states and migraine with aura. Analgesics and sedatives seem to have a significant effect on SD modulation. Studies have shown that ketamine, an NMDA receptor blocker, has the capacity to influence SD occurrence. The aim of this study was to analyze the dose-dependent effect of ketamine on SD susceptibility through electrocorticography (ECoG) and intrinsic optical signal (IOS) imaging in a gyrencephalic brain. Ketamine at a low-dose infusion (2 mg/kg/h) decreased SD spread and affected the amplitude, duration, and speed of SD deflections. Moreover, during ketamine infusion at this dose, there was a sustained decrease in the hyperemic response following SD. However, a high-dose infusion (4 mg/kg/h) of ketamine inhibited SD induction and expansion. Furthermore, a high-dose bolus (4 mg/kg), 1 min after stimulation, blocked SD propagation abruptly within 1-2 min and hindered SD induction and expansion for the following 15-30 min. The results suggest that ketamine may be therapeutically beneficial in preventing SDs. Nonetheless, an adequate dosage and route of administration should be considered and established for human use. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. Assessment of dedicated low-dose cardiac micro-CT reconstruction algorithms using the left ventricular volume of small rodents as a performance measure.

    PubMed

    Maier, Joscha; Sawall, Stefan; Kachelrieß, Marc

    2014-05-01

    Phase-correlated microcomputed tomography (micro-CT) imaging plays an important role in the assessment of mouse models of cardiovascular diseases and the determination of functional parameters such as the left ventricular volume. As the current gold standard, the phase-correlated Feldkamp reconstruction (PCF), shows poor performance for low-dose scans, more sophisticated reconstruction algorithms have been proposed to enable low-dose imaging. In this study, the authors focus on the McKinnon-Bates (MKB) algorithm, the low-dose phase-correlated (LDPC) reconstruction, and the high-dimensional total variation minimization reconstruction (HDTV), and investigate their potential to accurately determine the left ventricular volume at different dose levels from 50 to 500 mGy. The results were verified in phantom studies of a five-dimensional (5D) mathematical mouse phantom. Micro-CT data of eight mice, each administered an x-ray dose of 500 mGy, were acquired, retrospectively gated for cardiac and respiratory motion, and reconstructed using PCF, MKB, LDPC, and HDTV. Dose levels down to 50 mGy were simulated by using only a fraction of the projections. Contrast-to-noise ratio (CNR) was evaluated as a measure of image quality. Left ventricular volume was determined using different segmentation algorithms (Otsu, level sets, region growing). Forward projections of the 5D mouse phantom were performed to simulate a micro-CT scan. The simulated data were processed the same way as the real mouse data sets. Compared to the conventional PCF reconstruction, the MKB, LDPC, and HDTV algorithms yield images of increased quality in terms of CNR. While the MKB reconstruction only provides small improvements, a significant increase in CNR is observed in LDPC and HDTV reconstructions. The phantom studies demonstrate that left ventricular volumes can be determined accurately at 500 mGy. For lower dose levels, which were simulated for the real mouse data sets, the HDTV algorithm shows the best performance. At 50 mGy, the deviation from the reference obtained at 500 mGy was less than 4%. The LDPC algorithm also provides reasonable results, with deviations of less than 10% at 50 mGy, while the PCF and MKB reconstructions show larger deviations even at higher dose levels. LDPC and HDTV increase CNR and allow for quantitative evaluations even at dose levels as low as 50 mGy. The left ventricular volumes illustrate that cardiac parameters can be accurately estimated at the lowest dose levels if sophisticated algorithms are used. This allows the dose to be reduced by a factor of 10 compared to today's gold standard and opens new options for longitudinal studies of the heart.

  13. Assessment of dedicated low-dose cardiac micro-CT reconstruction algorithms using the left ventricular volume of small rodents as a performance measure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maier, Joscha, E-mail: joscha.maier@dkfz.de; Sawall, Stefan; Kachelrieß, Marc

    2014-05-15

    Purpose: Phase-correlated microcomputed tomography (micro-CT) imaging plays an important role in the assessment of mouse models of cardiovascular diseases and the determination of functional parameters such as the left ventricular volume. As the current gold standard, the phase-correlated Feldkamp reconstruction (PCF), shows poor performance for low-dose scans, more sophisticated reconstruction algorithms have been proposed to enable low-dose imaging. In this study, the authors focus on the McKinnon-Bates (MKB) algorithm, the low-dose phase-correlated (LDPC) reconstruction, and the high-dimensional total variation minimization reconstruction (HDTV), and investigate their potential to accurately determine the left ventricular volume at different dose levels from 50 to 500 mGy. The results were verified in phantom studies of a five-dimensional (5D) mathematical mouse phantom. Methods: Micro-CT data of eight mice, each administered an x-ray dose of 500 mGy, were acquired, retrospectively gated for cardiac and respiratory motion, and reconstructed using PCF, MKB, LDPC, and HDTV. Dose levels down to 50 mGy were simulated by using only a fraction of the projections. Contrast-to-noise ratio (CNR) was evaluated as a measure of image quality. Left ventricular volume was determined using different segmentation algorithms (Otsu, level sets, region growing). Forward projections of the 5D mouse phantom were performed to simulate a micro-CT scan. The simulated data were processed the same way as the real mouse data sets. Results: Compared to the conventional PCF reconstruction, the MKB, LDPC, and HDTV algorithms yield images of increased quality in terms of CNR. While the MKB reconstruction only provides small improvements, a significant increase in CNR is observed in LDPC and HDTV reconstructions. The phantom studies demonstrate that left ventricular volumes can be determined accurately at 500 mGy. For lower dose levels, which were simulated for the real mouse data sets, the HDTV algorithm shows the best performance. At 50 mGy, the deviation from the reference obtained at 500 mGy was less than 4%. The LDPC algorithm also provides reasonable results, with deviations of less than 10% at 50 mGy, while the PCF and MKB reconstructions show larger deviations even at higher dose levels. Conclusions: LDPC and HDTV increase CNR and allow for quantitative evaluations even at dose levels as low as 50 mGy. The left ventricular volumes illustrate that cardiac parameters can be accurately estimated at the lowest dose levels if sophisticated algorithms are used. This allows the dose to be reduced by a factor of 10 compared to today's gold standard and opens new options for longitudinal studies of the heart.

  14. Online Performance-Improvement Algorithms

    DTIC Science & Technology

    1994-08-01

    fault rate as the request sequence length approaches infinity. Their algorithms are based on an innovative use of the classical Ziv-Lempel [85] data ...Report CS-TR-348-91. [85] J. Ziv and A. Lempel. Compression of individual sequences via variable-rate coding. IEEE Trans. Inf. Theory, 24:530-536, 1978. 94...Deferred Data Structuring Recall that our incremental multi-trip algorithm spreads the building of the fence-tree over several trips in order to

  15. Wireless rake-receiver using adaptive filter with a family of partial update algorithms in noise cancellation applications

    NASA Astrophysics Data System (ADS)

    Fayadh, Rashid A.; Malek, F.; Fadhil, Hilal A.; Aldhaibani, Jaafar A.; Salman, M. K.; Abdullah, Farah Salwani

    2015-05-01

    For high-data-rate propagation in wireless ultra-wideband (UWB) communication systems, inter-symbol interference (ISI), multiple-access interference (MAI), and multiple-user interference (MUI) influence the performance of the wireless system. In this paper, a rake receiver is presented, with the signal spread using the direct-sequence spread-spectrum (DS-SS) technique. The adaptive rake-receiver structure adjusts the receiver tap weights using the least mean squares (LMS), normalized least mean squares (NLMS), and affine projection (APA) algorithms to support weak signals through noise cancellation and to mitigate the interferences. To reduce the computational complexity of these full-update algorithms while preserving convergence speed, a well-known family of partial-update (PU) adaptive filters was employed, including sequential-partial, periodic-partial, M-max-partial, and selective-partial update (SPU) algorithms. Simulation results of bit error rate (BER) versus signal-to-noise ratio (SNR) illustrate that the partial-update algorithms have nearly comparable performance to the full-update adaptive filters. Furthermore, the SPU algorithm performs close to the full-NLMS and full-APA, while the M-max-partial update performs close to the full-LMS algorithm.
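
    An M-max partial update touches only the M taps with the largest input magnitudes each iteration; a minimal NLMS-style sketch in a toy system-identification setup (all parameters illustrative):

        import numpy as np

        def mmax_pu_nlms(x, d, n_taps=16, m_update=4, mu=0.5, eps=1e-8):
            """M-max partial-update NLMS: adapt only the m_update taps whose
            regressor entries have the largest magnitude."""
            w = np.zeros(n_taps)
            y = np.zeros_like(x)
            for n in range(n_taps - 1, len(x)):
                u = x[n - n_taps + 1:n + 1][::-1]       # regressor, newest first
                y[n] = w @ u
                e = d[n] - y[n]                         # error signal
                idx = np.argsort(np.abs(u))[-m_update:] # taps with largest |u|
                w[idx] += mu * e * u[idx] / (eps + u @ u)
            return w, y

        rng = np.random.default_rng(4)
        x = rng.standard_normal(5000)
        h_true = rng.standard_normal(16) * 0.3          # unknown channel
        d = np.convolve(x, h_true)[:len(x)] + 0.01 * rng.standard_normal(len(x))
        w_est, _ = mmax_pu_nlms(x, d)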

  16. Monte Carlo evaluation of Acuros XB dose calculation Algorithm for intensity modulated radiation therapy of nasopharyngeal carcinoma

    NASA Astrophysics Data System (ADS)

    Yeh, Peter C. Y.; Lee, C. C.; Chao, T. C.; Tung, C. J.

    2017-11-01

    Intensity-modulated radiation therapy is an effective treatment modality for nasopharyngeal carcinoma. One important aspect of this cancer treatment is the need for an accurate dose algorithm that deals with the complex air/bone/tissue interfaces in the head-neck region, to achieve cure without radiation-induced toxicities. The Acuros XB algorithm explicitly solves the linear Boltzmann transport equation in voxelized volumes to account for tissue heterogeneities such as lung, bone, air, and soft tissue in the treatment field receiving radiotherapy. With single-beam setups in phantoms, this algorithm has already been demonstrated to achieve accuracy comparable with Monte Carlo simulations. In the present study, five nasopharyngeal carcinoma patients treated with intensity-modulated radiation therapy were examined for their dose distributions calculated using Acuros XB in the planning target volume and the organs-at-risk. Corresponding results of Monte Carlo simulations were computed from the electronic portal image data and the BEAMnrc/DOSXYZnrc code. Analysis of dose distributions in terms of the clinical indices indicated that Acuros XB was comparable in accuracy with Monte Carlo simulations and better than the anisotropic analytical algorithm for dose calculations in real patients.

  17. Comparison of Nine Statistical Model Based Warfarin Pharmacogenetic Dosing Algorithms Using the Racially Diverse International Warfarin Pharmacogenetic Consortium Cohort Database

    PubMed Central

    Liu, Rong; Li, Xi; Zhang, Wei; Zhou, Hong-Hao

    2015-01-01

    Objective Multiple linear regression (MLR) and machine learning techniques in pharmacogenetic algorithm-based warfarin dosing have been reported. However, the performances of these algorithms in a racially diverse group have never been objectively evaluated and compared. In this literature-based study, we compared the performances of eight machine learning techniques with those of MLR in a large, racially diverse cohort. Methods MLR, artificial neural network (ANN), regression tree (RT), multivariate adaptive regression splines (MARS), boosted regression tree (BRT), support vector regression (SVR), random forest regression (RFR), lasso regression (LAR) and Bayesian additive regression trees (BART) were applied in warfarin dose algorithms in a cohort from the International Warfarin Pharmacogenetics Consortium database. Covariates obtained by stepwise regression from 80% of randomly selected patients were used to develop the algorithms. To compare the performances of these algorithms, the mean percentage of patients whose predicted dose fell within 20% of the actual dose (mean percentage within 20%) and the mean absolute error (MAE) were calculated in the remaining 20% of patients. The performances of these techniques in different races, as well as across the therapeutic warfarin dose ranges, were compared. Robust results were obtained after 100 rounds of resampling. Results BART, MARS and SVR were statistically indistinguishable and significantly outperformed all the other approaches in the whole cohort (MAE: 8.84–8.96 mg/week, mean percentage within 20%: 45.88%–46.35%). In the White population, MARS and BART showed a higher mean percentage within 20% and a lower MAE than MLR (all p values < 0.05). In the Asian population, SVR, BART, MARS and LAR performed the same as MLR. MLR and LAR performed best in the Black population. When patients were grouped in terms of warfarin dose range, all machine learning techniques except ANN and LAR showed a significantly higher mean percentage within 20% and lower MAE (all p values < 0.05) than MLR in the low- and high-dose ranges. Conclusion Overall, the machine learning-based techniques BART, MARS and SVR performed better than MLR in warfarin pharmacogenetic dosing. Differences in algorithm performance exist among races. Moreover, machine learning-based algorithms tended to perform better in the low- and high-dose ranges than MLR. PMID:26305568
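
    The two evaluation metrics are simple to reproduce; a minimal sketch of MAE and the percentage of predictions within 20% of the actual weekly dose (toy numbers):

        import numpy as np

        def warfarin_metrics(pred: np.ndarray, actual: np.ndarray):
            mae = np.mean(np.abs(pred - actual))                      # mg/week
            within_20 = np.mean(np.abs(pred - actual) <= 0.2 * actual)
            return mae, 100.0 * within_20

        pred = np.array([28.0, 35.0, 52.5, 21.0])    # predicted weekly doses
        actual = np.array([30.0, 42.0, 49.0, 35.0])  # actual stable doses
        mae, pct20 = warfarin_metrics(pred, actual)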

  18. The impact of low-Z and high-Z metal implants in IMRT: A Monte Carlo study of dose inaccuracies in commercial dose algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spadea, Maria Francesca, E-mail: mfspadea@unicz.it; Verburg, Joost Mathias; Seco, Joao

    2014-01-15

    Purpose: The aim of the study was to evaluate the dosimetric impact of low-Z and high-Z metallic implants on IMRT plans. Methods: Computed tomography (CT) scans of three patients were analyzed to study effects due to the presence of titanium (low-Z), platinum and gold (high-Z) inserts. To eliminate artifacts in CT images, a sinogram-based metal artifact reduction algorithm was applied. IMRT dose calculations were performed on both the uncorrected and corrected images using a commercial planning system (convolution/superposition algorithm) and an in-house Monte Carlo platform. Dose differences between uncorrected and corrected datasets were computed and analyzed using the gamma index (Pγ<1), setting 2 mm and 2% as the distance-to-agreement and dose-difference criteria, respectively. Beam-specific depth dose profiles across the metal were also examined. Results: Dose discrepancies between corrected and uncorrected datasets were not significant for low-Z material. High-Z materials caused underdosage of 20%–25% in the region surrounding the metal and overdosage of 10%–15% downstream of the hardware. The gamma index test yielded Pγ<1 > 99% for all low-Z cases, while for high-Z cases it returned 91% < Pγ<1 < 99%. Analysis of the depth dose curve of a single beam for low-Z cases revealed that, although the dose attenuation is altered inside the metal, it does not differ downstream of the insert. However, for high-Z metal implants the dose is increased up to 10%–12% around the insert. In addition, the Monte Carlo method was more sensitive to the presence of metal inserts than the superposition/convolution algorithm. Conclusions: The reduction of metal artifacts in CT images is relevant for high-Z implants. In this case, dose distributions should be calculated using Monte Carlo algorithms, given their superior accuracy in dose modeling in and around the metal. In addition, knowledge of the composition of metal inserts improves the accuracy of the Monte Carlo dose calculation significantly.

  19. TU-G-204-09: The Effects of Reduced- Dose Lung Cancer Screening CT On Lung Nodule Detection Using a CAD Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Young, S; Lo, P; Kim, G

    2015-06-15

    Purpose: While lung cancer screening CT is performed at low doses, the purpose of this study was to investigate the effects of further reducing dose on the performance of a CAD nodule-detection algorithm. Methods: We selected 50 cases from our local database of National Lung Screening Trial (NLST) patients for which we had both the image series and the raw CT data from the original scans. All scans were acquired with fixed mAs (25 for standard-sized patients, 40 for large patients) on a 64-slice scanner (Sensation 64, Siemens Healthcare). All images were reconstructed with 1-mm slice thickness, B50 kernel. Ten of the cases had at least one nodule reported on the NLST reader forms. Based on a previously published technique, we added noise to the raw data to simulate reduced-dose versions of each case at 50% and 25% of the original NLST dose (i.e., approximately 1.0 and 0.5 mGy CTDIvol). For each case at each dose level, the CAD detection algorithm was run and nodules greater than 4 mm in diameter were reported. These CAD results were compared to “truth”, defined as the approximate nodule centroids from the NLST reports. Subject-level mean sensitivities and false-positive rates were calculated for each dose level. Results: The mean sensitivities of the CAD algorithm were 35% at the original dose, 20% at 50% dose, and 42.5% at 25% dose. The false-positive rates, in decreasing-dose order, were 3.7, 2.9, and 10 per case. In certain cases, particularly in larger patients, there were severe photon-starvation artifacts, especially in the apical region due to the highly attenuating shoulders. Conclusion: The detection task was challenging for the CAD algorithm at all dose levels, including the original NLST dose. However, the false-positive rate at 25% dose approximately tripled, suggesting a loss of CAD robustness somewhere between 0.5 and 1.0 mGy. NCI grant U01 CA181156 (Quantitative Imaging Network); Tobacco Related Disease Research Project grant 22RT-0131.

  20. SU-F-T-428: An Optimization-Based Commissioning Tool for Finite Size Pencil Beam Dose Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Y; Tian, Z; Song, T

    Purpose: Finite size pencil beam (FSPB) algorithms are commonly used to pre-calculate the beamlet dose distribution for IMRT treatment planning. FSPB commissioning, which usually requires fine tuning of the FSPB kernel parameters, is crucial to dose calculation accuracy and hence plan quality. Yet due to the large number of beamlets, FSPB commissioning can be very tedious. This abstract reports an optimization-based FSPB commissioning tool we have developed in MatLab to facilitate commissioning. Methods: An FSPB dose kernel generally contains two types of parameters: the profile parameters determining the dose kernel shape, and a 2D set of scaling factors accounting for longitudinal and off-axis corrections. The former were fitted to the penumbra of a reference broad beam's dose profile with the Levenberg-Marquardt algorithm. Since the dose distribution of a broad beam is simply a linear superposition of the dose kernels of each beamlet, calculated with the fitted profile parameters and scaled using the scaling factors, these factors could be determined by solving an optimization problem that minimizes the discrepancies between the calculated dose of broad beams and the reference dose. Results: We have commissioned an FSPB algorithm for three linac photon beams (6 MV, 15 MV and 6 MV FFF). Doses for four field sizes (6×6 cm², 10×10 cm², 15×15 cm² and 20×20 cm²) were calculated and compared with the reference dose exported from the Eclipse TPS. For depth dose curves, the differences are less than 1% of maximum dose beyond the depth of maximum dose for most cases. For lateral dose profiles, the differences are less than 2% of central dose in inner-beam regions. The differences in the output factors are within 1% for all three beams. Conclusion: We have developed an optimization-based commissioning tool for FSPB algorithms to facilitate commissioning, providing sufficient accuracy of beamlet dose calculation for IMRT optimization.
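
    The penumbra-fitting step can be sketched with a least-squares fit; a minimal example assuming an error-function edge model and illustrative starting values (not the commissioned kernel):

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.special import erf

        def broad_beam_edge(x, sigma, edge, d0):
            """Half-field profile: a Gaussian kernel integrated across a field edge."""
            return 0.5 * d0 * (1.0 + erf((edge - x) / (np.sqrt(2.0) * sigma)))

        x = np.linspace(-20.0, 20.0, 81)          # off-axis position (mm)
        measured = broad_beam_edge(x, sigma=3.2, edge=5.0, d0=1.0)
        measured += np.random.default_rng(5).normal(0.0, 0.005, x.size)

        # curve_fit uses a Levenberg-Marquardt least-squares solver by default.
        popt, _ = curve_fit(broad_beam_edge, x, measured, p0=(2.0, 4.0, 1.0))
        sigma_fit, edge_fit, d0_fit = popt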

  1. National dosimetric audit network finds discrepancies in AAA lung inhomogeneity corrections.

    PubMed

    Dunn, Leon; Lehmann, Joerg; Lye, Jessica; Kenny, John; Kron, Tomas; Alves, Andrew; Cole, Andrew; Zifodya, Jackson; Williams, Ivan

    2015-07-01

    This work presents the Australian Clinical Dosimetry Service's (ACDS) findings of an investigation of systematic discrepancies between treatment planning system (TPS) calculated and measured audit doses. Specifically, a comparison between the Anisotropic Analytic Algorithm (AAA) and other common dose-calculation algorithms in regions downstream (≥2 cm) from low-density material in anthropomorphic and slab phantom geometries is presented. Two measurement setups involving rectilinear slab phantoms (ACDS Level II audit) and anthropomorphic geometries (ACDS Level III audit) were used in conjunction with ion chamber (planar 2D array and Farmer-type) measurements. Measured doses were compared to calculated doses for a variety of cases, with and without the presence of inhomogeneities and beam modifiers, in 71 audits. Results demonstrate a systematic AAA underdose, with an average discrepancy of 2.9 ± 1.2%, when the AAA algorithm is used in regions distal to lung-tissue interfaces and lateral beams are used with anthropomorphic phantoms. This systematic discrepancy was found in all Level III audits of facilities using the AAA algorithm. The discrepancy is not seen when identical measurements are compared for other common dose-calculation algorithms (average discrepancy -0.4 ± 1.7%), including the Acuros XB algorithm also available with the Eclipse TPS. For slab phantom geometries (Level II audits) with similar measurement points downstream of inhomogeneities, the discrepancy is also not seen. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.

  2. Single-dose volume regulation algorithm for a gas-compensated intrathecal infusion pump.

    PubMed

    Nam, Kyoung Won; Kim, Kwang Gi; Sung, Mun Hyun; Choi, Seong Wook; Kim, Dae Hyun; Jo, Yung Ho

    2011-01-01

    The internal pressures of medication reservoirs of gas-compensated intrathecal medication infusion pumps decrease when medication is discharged, and these discharge-induced pressure drops can decrease the volume of medication discharged. To prevent these reductions, the volumes discharged must be adjusted to maintain the required dosage levels. In this study, the authors developed an automatic control algorithm for an intrathecal infusion pump developed by the Korean National Cancer Center that regulates single-dose volumes. The proposed algorithm estimates the amount of medication remaining and adjusts control parameters automatically to maintain single-dose volumes at predetermined levels. Experimental results demonstrated that the proposed algorithm can regulate mean single-dose volumes with a variation of <3% and estimate the remaining medication volume with an accuracy of >98%. © 2010, Copyright the Authors. Artificial Organs © 2010, International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
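
    The regulation idea can be sketched under an assumed physical model: if an isothermal gas chamber (Boyle's law) drives the reservoir, the driving pressure falls as medication is discharged, so the valve-open time must be lengthened to hold each dose volume constant. All names and constants below are hypothetical, not the published controller:

        # Illustrative sketch only; assumed physics and arbitrary constants.
        P0 = 200.0          # initial gas pressure (kPa) with a full reservoir
        V_GAS0 = 10.0       # initial gas volume (ml)
        V_MED0 = 20.0       # initial medication volume (ml)
        FLOW_COEFF = 0.005  # assumed flow per unit pressure (ml/s/kPa)

        def pressure(v_dispensed: float) -> float:
            """Boyle's law: gas expands into the volume vacated by medication."""
            return P0 * V_GAS0 / (V_GAS0 + v_dispensed)

        def valve_open_time(v_dispensed: float, dose_ml: float = 0.1) -> float:
            """Lengthen the opening time as the driving pressure drops."""
            return dose_ml / (FLOW_COEFF * pressure(v_dispensed))

        dispensed = 0.0
        for _ in range(5):
            t_open = valve_open_time(dispensed)
            dispensed += 0.1                   # one regulated single dose
            remaining = V_MED0 - dispensed     # estimated remaining medication
            print(f"open {t_open:.2f} s, remaining {remaining:.1f} ml")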

  3. A simplified analytical random walk model for proton dose calculation

    NASA Astrophysics Data System (ADS)

    Yao, Weiguang; Merchant, Thomas E.; Farr, Jonathan B.

    2016-10-01

    We propose an analytical random walk model for proton dose calculation in a laterally homogeneous medium. A formula for the spatial fluence distribution of primary protons is derived. The variance of the spatial distribution is in the form of a distance-squared law of the angular distribution. To improve the accuracy of dose calculation in the Bragg peak region, the energy spectrum of the protons is used. The accuracy is validated against Monte Carlo simulation in water phantoms with either air gaps or a slab of bone inserted. The algorithm accurately reflects the dose dependence on the depth of the bone and can deal with small-field dosimetry. We further applied the algorithm to patients’ cases in the highly heterogeneous head and pelvis sites and used a gamma test to show the reasonable accuracy of the algorithm in these sites. Our algorithm is fast for clinical use.
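
    The distance-squared law for the lateral variance can be sketched in Fermi-Eyges style, where the variance at depth z accumulates scattering power weighted by (z - z')^2; the scattering-power function below is a toy placeholder, not the published model:

        import numpy as np

        def lateral_variance(z_grid: np.ndarray, scattering_power: np.ndarray) -> np.ndarray:
            """sigma^2(z) = integral_0^z (z - z')^2 T(z') dz' on a uniform grid."""
            dz = z_grid[1] - z_grid[0]
            var = np.empty_like(z_grid)
            for i, z in enumerate(z_grid):
                w = (z - z_grid[:i + 1]) ** 2
                var[i] = np.sum(w * scattering_power[:i + 1]) * dz
            return var

        z = np.linspace(0.0, 20.0, 201)          # depth in cm
        T = 1e-3 * (1.0 + 0.1 * z)               # toy scattering power (rad^2/cm)
        sigma = np.sqrt(lateral_variance(z, T))  # lateral spread of primary protons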

  4. Comparison of optimization algorithms in intensity-modulated radiation therapy planning

    NASA Astrophysics Data System (ADS)

    Kendrick, Rachel

    Intensity-modulated radiation therapy is used to better conform the radiation dose to the target while avoiding healthy tissue. Planning programs employ optimization methods to search for the best fluence of each photon beam and thereby create the best treatment plan. The Computational Environment for Radiotherapy Research (CERR), a program written in MATLAB, was used to examine some commonly used algorithms for one 5-beam plan. The algorithms include the genetic algorithm, quadratic programming, pattern search, constrained nonlinear optimization, simulated annealing, the optimization method used in Varian Eclipse™, and some hybrids of these. Quadratic programming, simulated annealing, and a quadratic/simulated annealing hybrid were also separately compared using different prescription doses. The dose-volume histograms as well as the visual dose color washes were used to compare the plans. CERR's built-in quadratic programming provided the best overall plan, although other algorithms rivaled it in avoidance of the organ-at-risk. Hybrids of quadratic programming with some of these algorithms suggest the possibility of better planning programs, as shown by the improved quadratic/simulated annealing plan compared to the simulated annealing algorithm alone. Further experimentation will be done to improve cost functions and computational time.
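
    Fluence optimization by quadratic programming reduces, in its simplest nonnegative least-squares form, to finding beamlet weights x >= 0 such that the dose A x approaches the prescription; a minimal sketch with a random, purely illustrative influence matrix:

        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(6)
        n_voxels, n_beamlets = 200, 40
        A = rng.random((n_voxels, n_beamlets)) * 0.1   # dose per unit beamlet weight
        d = np.full(n_voxels, 2.0)                     # prescribed dose per voxel (Gy)

        weights, residual = nnls(A, d)                 # solves min ||A x - d||, x >= 0
        dose = A @ weights                             # resulting voxel doses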

  5. Efficient implementation of the 3D-DDA ray traversal algorithm on GPU and its application in radiation dose calculation.

    PubMed

    Xiao, Kai; Chen, Danny Z; Hu, X Sharon; Zhou, Bo

    2012-12-01

    The three-dimensional digital differential analyzer (3D-DDA) algorithm is a widely used ray traversal method, which is also at the core of many convolution/superposition (C/S) dose calculation approaches. However, porting existing C/S dose calculation methods onto graphics processing units (GPUs) has brought challenges to retaining the efficiency of this algorithm. In particular, a straightforward implementation of the original 3D-DDA algorithm incurs a lot of branch divergence, which conflicts with the GPU programming model and leads to suboptimal performance. In this paper, an efficient GPU implementation of the 3D-DDA algorithm is proposed, which effectively reduces such branch divergence and improves the performance of C/S dose calculation programs running on the GPU. The main idea of the proposed method is to convert a number of conditional statements in the original 3D-DDA algorithm into a set of simple operations (e.g., arithmetic, comparison, and logic) which are better supported by the GPU architecture. To verify and demonstrate the performance improvement, this ray traversal method was integrated into a GPU-based collapsed cone convolution/superposition (CCCS) dose calculation program. The proposed method has been tested using a water phantom and various clinical cases on an NVIDIA GTX570 GPU. The CCCS dose calculation program based on the efficient 3D-DDA ray traversal implementation runs 1.42–2.67× faster than the one based on the original 3D-DDA implementation, without losing any accuracy. The results show that the proposed method can effectively reduce branch divergence in the original 3D-DDA ray traversal algorithm and improve the performance of the CCCS program running on the GPU. Considering the wide utilization of the 3D-DDA algorithm, various applications can benefit from this implementation method.
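
    The branch-reduction idea for one traversal step can be sketched by replacing the usual if/elif axis selection with comparisons and arithmetic that map well to GPU warps; plain NumPy is used here for clarity and all names are illustrative:

        import numpy as np

        def dda_step(t_max: np.ndarray, t_delta: np.ndarray,
                     voxel: np.ndarray, step: np.ndarray):
            """Advance one voxel without data-dependent branching.

            t_max:   parametric distance to the next boundary per axis (3,)
            t_delta: parametric distance across one voxel per axis (3,)
            voxel:   current integer voxel index (3,)
            step:    +1/-1 travel direction per axis (3,)
            """
            # mask[k] == 1 exactly for the axis with the smallest t_max
            mask = (t_max == t_max.min()).astype(t_max.dtype)
            voxel = voxel + (mask * step).astype(voxel.dtype)
            t_exit = t_max.min()                 # exit parameter of this voxel
            t_max = t_max + mask * t_delta       # advance crossed axis only
            return t_max, voxel, t_exit

        t_max = np.array([0.4, 0.7, 0.9])
        t_delta = np.array([0.8, 1.4, 1.8])
        voxel = np.array([10, 10, 10])
        step = np.array([1, 1, -1])
        t_max, voxel, t_exit = dda_step(t_max, t_delta, voxel, step)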

  6. Radiation dose reduction using a neck detection algorithm for single spiral brain and cervical spine CT acquisition in the trauma setting.

    PubMed

    Ardley, Nicholas D; Lau, Ken K; Buchan, Kevin

    2013-12-01

    Cervical spine injuries occur in 4-8% of adults with head trauma. A dual-acquisition technique has traditionally been used for CT scanning of the brain and cervical spine. The purpose of this study was to determine the efficacy of radiation dose reduction using a single-acquisition technique that incorporates both anatomical regions with a dedicated neck detection algorithm. Thirty trauma patients referred for brain and cervical spine CT were included and scanned with the single-acquisition technique. The radiation doses from the single CT acquisition with the neck detection algorithm, which allows appropriate independent dose administration for the brain and cervical spine regions, were recorded. Comparison was made both to doses calculated from a simulation of the traditional dual acquisition with matching parameters, and to doses from the legacy dual-acquisition technique in a retrospective sample of the same size. The mean simulated dose for the traditional dual-acquisition technique was 3.99 mSv, comparable to the average dose of 4.2 mSv from 30 previous patients who had CT of the brain and cervical spine as dual acquisitions. The mean dose from the single-acquisition technique was 3.35 mSv, a 16% overall dose reduction. The images from the single-acquisition technique were of excellent diagnostic quality. The new single-acquisition CT technique incorporating the neck detection algorithm for brain and cervical spine significantly reduces the overall radiation dose by eliminating the unavoidable overlapping range between the two anatomical regions that occurs with the traditional dual-acquisition technique.

  7. SU-E-T-33: A Feasibility-Seeking Algorithm Applied to Planning of Intensity Modulated Proton Therapy: A Proof of Principle Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Penfold, S; Casiraghi, M; Dou, T

    2015-06-15

    Purpose: To investigate the applicability of feasibility-seeking cyclic orthogonal projections to the field of intensity modulated proton therapy (IMPT) inverse planning. Seeking feasibility of constraints only, as opposed to optimizing a merit function, is less demanding algorithmically and holds promise of parallel-computation capability with non-cyclic orthogonal projection algorithms such as string-averaging or block-iterative strategies. Methods: A virtual 2D geometry was designed containing a C-shaped planning target volume (PTV) surrounding an organ at risk (OAR). The geometry was pixelized into 1 mm pixels. Four beams containing a subset of proton pencil beams were simulated in Geant4 to provide the system matrix A whose elements a_ij correspond to the dose delivered to pixel i by a unit-intensity pencil beam j. A cyclic orthogonal projections algorithm was applied with the goal of finding a pencil beam intensity distribution that would meet the following dose requirements: D_OAR < 54 Gy and 57 Gy < D_PTV < 64.2 Gy. The cyclic algorithm was based on the concept of orthogonal projections onto half-spaces according to the Agmon-Motzkin-Schoenberg algorithm, also known as ‘ART for inequalities’. Results: The cyclic orthogonal projections algorithm resulted in less than 5% of the PTV pixels and less than 1% of the OAR pixels violating their dose constraints, respectively. Because of the abutting OAR-PTV geometry and the realistic modelling of the pencil beam penumbra, complete satisfaction of the dose objectives was not achieved, although this would be a clinically acceptable plan for, e.g., a meningioma abutting the brainstem. Conclusion: The cyclic orthogonal projections algorithm was demonstrated to be an effective tool for inverse IMPT planning in the 2D test geometry described. We plan to further develop this linear algorithm to be capable of incorporating dose-volume constraints into the feasibility-seeking algorithm.
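
    The projection scheme is compact: each violated half-space constraint a_i x <= b_i is corrected by an orthogonal projection. A minimal sketch of this 'ART for inequalities' on a toy two-pixel system, with the dose bounds rewritten as one-sided inequalities:

        import numpy as np

        def art_inequalities(A, b, n_sweeps=200, relax=1.0):
            """Agmon-Motzkin-Schoenberg cyclic projections for A x <= b,
            with nonnegative intensities enforced by clipping."""
            x = np.zeros(A.shape[1])
            for _ in range(n_sweeps):
                for a_i, b_i in zip(A, b):      # cycle through the half-spaces
                    viol = a_i @ x - b_i
                    if viol > 0.0:              # project onto a_i @ x <= b_i
                        x -= relax * viol * a_i / (a_i @ a_i)
                np.clip(x, 0.0, None, out=x)    # nonnegative pencil-beam weights
            return x

        # Dose bounds D_min <= A x <= D_max become A x <= D_max and (-A) x <= -D_min.
        A_dose = np.array([[1.0, 0.2],          # toy PTV-pixel response
                           [0.3, 1.0]])         # toy OAR-pixel response
        A_ineq = np.vstack([A_dose, -A_dose])
        b_ineq = np.array([64.2, 54.0, -57.0, 0.0])  # PTV upper, OAR upper, PTV lower
        x = art_inequalities(A_ineq, b_ineq)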

  8. Thermal particle image velocity estimation of fire plume flow

    Treesearch

    Xiangyang Zhou; Lulu Sun; Shankar Mahalingam; David R. Weise

    2003-01-01

    For the purpose of studying wildfire spread in living vegetation such as chaparral in California, a thermal particle image velocity (TPIV) algorithm for nonintrusively measuring flame gas velocities through thermal infrared (IR) imagery was developed. By tracing thermal particles in successive digital IR images, the TPIV algorithm can estimate the velocity field in a...

  9. Validation of a track repeating algorithm for intensity modulated proton therapy: clinical cases study

    NASA Astrophysics Data System (ADS)

    Yepes, Pablo P.; Eley, John G.; Liu, Amy; Mirkovic, Dragan; Randeniya, Sharmalee; Titt, Uwe; Mohan, Radhe

    2016-04-01

    Monte Carlo (MC) methods are acknowledged as the most accurate technique to calculate dose distributions. However, due to their lengthy calculation times, they are difficult to utilize in the clinic or for large retrospective studies. Track-repeating algorithms, based on MC-generated particle track data in water, accelerate dose calculations substantially while essentially preserving the accuracy of MC. In this study, we present the validation of an efficient dose calculation algorithm for intensity modulated proton therapy, the fast dose calculator (FDC), based on a track-repeating technique. We validated the FDC algorithm for 23 patients, which included 7 brain, 6 head-and-neck, 5 lung, 1 spine, 1 pelvis and 3 prostate cases. For validation, we compared FDC-generated dose distributions with those from a full-fledged Monte Carlo code based on GEANT4 (G4). We compared dose-volume histograms and 3D gamma-indices, and analyzed a series of dosimetric indices. More than 99% of the voxels in the voxelized phantoms describing the patients had a gamma-index smaller than unity for the 2%/2 mm criteria. In addition, the difference relative to the prescribed dose between the dosimetric indices calculated with FDC and G4 was less than 1%. FDC reduces the calculation time from 5 ms per proton to around 5 μs.
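
    The track-repeating concept can be sketched conceptually: a proton track precomputed in water is replayed through the patient, stretching each step by the local water-to-material stopping-power ratio; the track data and material lookup below are toy values, not the FDC implementation:

        import numpy as np

        water_track = [  # (step length in water [cm], energy deposited [MeV])
            (0.5, 1.2), (0.5, 1.4), (0.5, 1.8), (0.5, 2.6), (0.2, 4.0),
        ]

        def replay_track(entry: np.ndarray, direction: np.ndarray,
                         rel_stopping_power) -> list:
            """Replay a water track through a heterogeneous geometry."""
            pos = entry.astype(float)
            deposits = []
            for length_w, edep in water_track:
                s = rel_stopping_power(pos)         # material-to-water ratio at pos
                step = length_w / s                 # stretch (or shrink) the step
                pos = pos + step * direction
                deposits.append((pos.copy(), edep)) # stored energy reused as-is
            return deposits

        # Toy geometry: lung-like (low density) for x < 2 cm, soft tissue beyond.
        rel_sp = lambda p: 0.3 if p[0] < 2.0 else 1.0
        dose_points = replay_track(np.zeros(3), np.array([1.0, 0.0, 0.0]), rel_sp)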

  10. SU-E-T-465: Dose Calculation Method for Dynamic Tumor Tracking Using a Gimbal-Mounted Linac

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sugimoto, S; Inoue, T; Kurokawa, C

    Purpose: Dynamic tumor tracking using the gimbal-mounted linac (Vero4DRT, Mitsubishi Heavy Industries, Ltd., Japan) has become available for cases where respiratory motion is significant. The irradiation accuracy of dynamic tumor tracking has been reported to be excellent. In addition to irradiation accuracy, a fast and accurate dose calculation algorithm is needed to validate the dose distribution in the presence of respiratory motion, because multiple phases of the motion have to be considered. A modification of the dose calculation algorithm is necessary for the gimbal-mounted linac due to the degrees of freedom of the gimbal swing. The dose calculation algorithm for the gimbal motion was implemented using linear transformations between coordinate systems. Methods: The linear transformation matrices between the coordinate systems with and without gimbal swings were constructed using combinations of translation and rotation matrices. The coordinate system where the radiation source is at the origin and the beam axis lies along the z axis was adopted. The transformation can be divided into the translation from the radiation source to the gimbal rotation center, the two rotations around the center corresponding to the gimbal swings, and the translation from the gimbal center back to the radiation source. After applying the transformation matrix to the phantom or patient image, the dose calculation can be performed as in the case of no gimbal swing. The algorithm was implemented in the treatment planning system PlanUNC (University of North Carolina, NC), using the convolution/superposition algorithm. Dose calculations with and without gimbal swings were performed for a 3 × 3 cm² field with a grid size of 5 mm. Results: The calculation time was about 3 minutes per beam. No significant additional time due to the gimbal swing was observed. Conclusions: A dose calculation algorithm for finite gimbal swing was implemented. The calculation time was moderate.
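
    The transformation is a standard composition of homogeneous matrices: translate the source to the gimbal rotation center, apply the two gimbal rotations, and translate back. A minimal sketch with illustrative distances and swing angles:

        import numpy as np

        def translation(t):
            M = np.eye(4)
            M[:3, 3] = t
            return M

        def rot_x(a):
            c, s = np.cos(a), np.sin(a)
            return np.array([[1, 0, 0, 0], [0, c, -s, 0],
                             [0, s, c, 0], [0, 0, 0, 1.0]])

        def rot_y(a):
            c, s = np.cos(a), np.sin(a)
            return np.array([[c, 0, s, 0], [0, 1, 0, 0],
                             [-s, 0, c, 0], [0, 0, 0, 1.0]])

        def gimbal_transform(pan_rad, tilt_rad, src_to_center=1.0):
            """Map phantom coordinates so dose calc proceeds as if no swing."""
            to_center = translation([0.0, 0.0, -src_to_center])
            back = translation([0.0, 0.0, src_to_center])
            return back @ rot_x(tilt_rad) @ rot_y(pan_rad) @ to_center

        T = gimbal_transform(np.deg2rad(2.0), np.deg2rad(-1.5))
        point = np.array([10.0, 5.0, 100.0, 1.0])   # homogeneous phantom point
        transformed = T @ point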

  11. Priori mask guided image reconstruction (p-MGIR) for ultra-low dose cone-beam computed tomography

    NASA Astrophysics Data System (ADS)

    Park, Justin C.; Zhang, Hao; Chen, Yunmei; Fan, Qiyong; Kahler, Darren L.; Liu, Chihray; Lu, Bo

    2015-11-01

    Recently, compressed sensing (CS) based iterative reconstruction methods have received attention because of their ability to reconstruct cone beam computed tomography (CBCT) images of good quality from sparsely sampled or noisy projections, thus enabling dose reduction. However, some challenges remain. In particular, there is always a tradeoff between image resolution and noise/streak-artifact reduction, governed by the amount of regularization weighting that is applied uniformly across the CBCT volume. The purpose of this study is to develop a novel low-dose CBCT reconstruction algorithm framework called priori mask guided image reconstruction (p-MGIR) that allows reconstruction of high-quality low-dose CBCT images while preserving image resolution. In p-MGIR, the unknown CBCT volume was mathematically modeled as a combination of two regions: (1) where anatomical structures are complex, and (2) where intensities are relatively uniform. The priori mask, which is the key concept of the p-MGIR algorithm, was defined as the matrix that distinguishes between the two separate CBCT regions: where the resolution needs to be preserved, and where streaks or noise need to be suppressed. We then alternately updated each part of the image by solving two sub-minimization problems iteratively, where one minimization focused on preserving the edge information of the first part while the other concentrated on removing noise/artifacts from the latter part. To evaluate the performance of the p-MGIR algorithm, a numerical head-and-neck phantom, a Catphan 600 physical phantom, and a clinical head-and-neck cancer case were used for analysis. The results were compared with the standard Feldkamp-Davis-Kress as well as conventional CS-based algorithms. Examination of the p-MGIR algorithm showed that high-quality low-dose CBCT images can be reconstructed without compromising image resolution. For both the phantom and the patient cases, p-MGIR is able to achieve a clinically reasonable image with 60 projections. Therefore, a clinically viable, high-resolution head-and-neck CBCT image can be obtained while cutting the dose by 83%. Moreover, the image quality obtained using p-MGIR is better than that obtained using the other algorithms. In this work, we propose a novel low-dose CBCT reconstruction algorithm called p-MGIR. It can potentially be used as a CBCT reconstruction algorithm for low-dose scan requests.

  12. Multiple alignment analysis on phylogenetic tree of the spread of SARS epidemic using distance method

    NASA Astrophysics Data System (ADS)

    Amiroch, S.; Pradana, M. S.; Irawan, M. I.; Mukhlash, I.

    2017-09-01

    Multiple alignment (MA) is a particularly important tool for studying viral genomes and determining the evolutionary process of a specific virus. Applying MA to the spread of the severe acute respiratory syndrome (SARS) epidemic is of interest because this epidemic spread so quickly a few years ago that it drew medical attention in many countries. Although much software exists to process multiple sequences, the use of pairwise alignment to build the MA remains important to consider. Previous research processed the alignment between sequences with the Super Pairwise Alignment algorithm; in this study, the Needleman-Wunsch dynamic programming algorithm, simulated in Matlab, was used. The MA analysis yielded stable and unstable regions that indicate the positions where mutations occur, and the phylogenetic tree of the SARS epidemic was constructed using the distance method.
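
    The Needleman-Wunsch recurrence fills a score matrix from the three possible moves (diagonal match/mismatch, gap in either sequence); a minimal global-alignment sketch with simple scoring (real analyses use substitution matrices and affine gaps):

        import numpy as np

        def needleman_wunsch(a: str, b: str, match=1, mismatch=-1, gap=-1):
            n, m = len(a), len(b)
            F = np.zeros((n + 1, m + 1))
            F[:, 0] = gap * np.arange(n + 1)          # leading gaps in b
            F[0, :] = gap * np.arange(m + 1)          # leading gaps in a
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    s = match if a[i - 1] == b[j - 1] else mismatch
                    F[i, j] = max(F[i - 1, j - 1] + s,   # align a[i-1] with b[j-1]
                                  F[i - 1, j] + gap,     # gap in b
                                  F[i, j - 1] + gap)     # gap in a
            return F[n, m]

        score = needleman_wunsch("GATTACA", "GCATGCU")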

  13. 40 CFR Appendix A to Part 197 - Calculation of Annual Committed Effective Dose Equivalent

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... resulting from irradiation of the tissue or organ to the total risk when the whole body is irradiated... internal irradiation from incorporated radionuclides, the total absorbed dose will be spread out in time...

  14. 40 CFR Appendix A to Part 197 - Calculation of Annual Committed Effective Dose Equivalent

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... resulting from irradiation of the tissue or organ to the total risk when the whole body is irradiated... internal irradiation from incorporated radionuclides, the total absorbed dose will be spread out in time...

  15. 40 CFR Appendix A to Part 197 - Calculation of Annual Committed Effective Dose Equivalent

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... resulting from irradiation of the tissue or organ to the total risk when the whole body is irradiated... internal irradiation from incorporated radionuclides, the total absorbed dose will be spread out in time...

  16. 40 CFR Appendix A to Part 197 - Calculation of Annual Committed Effective Dose Equivalent

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... resulting from irradiation of the tissue or organ to the total risk when the whole body is irradiated... internal irradiation from incorporated radionuclides, the total absorbed dose will be spread out in time...

  17. 40 CFR Appendix A to Part 197 - Calculation of Annual Committed Effective Dose Equivalent

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... resulting from irradiation of the tissue or organ to the total risk when the whole body is irradiated... internal irradiation from incorporated radionuclides, the total absorbed dose will be spread out in time...

  18. Predicting warfarin dosage in European–Americans and African–Americans using DNA samples linked to an electronic health record

    PubMed Central

    Ramirez, Andrea H; Shi, Yaping; Schildcrout, Jonathan S; Delaney, Jessica T; Xu, Hua; Oetjens, Matthew T; Zuvich, Rebecca L; Basford, Melissa A; Bowton, Erica; Jiang, Min; Speltz, Peter; Zink, Raquel; Cowan, James; Pulley, Jill M; Ritchie, Marylyn D; Masys, Daniel R; Roden, Dan M; Crawford, Dana C; Denny, Joshua C

    2012-01-01

    Aim Warfarin pharmacogenomic algorithms reduce dosing error, but perform poorly in non-European–Americans. Electronic health record (EHR) systems linked to biobanks may allow for pharmacogenomic analysis, but they have not yet been used for this purpose. Patients & methods We used BioVU, the Vanderbilt EHR-linked DNA repository, to identify European–Americans (n = 1022) and African–Americans (n = 145) on stable warfarin therapy and evaluated the effect of 15 pharmacogenetic variants on stable warfarin dose. Results Associations between variants in VKORC1, CYP2C9 and CYP4F2 and weekly dose were observed in European–Americans, as well as additional variants in CYP2C9 and CALU in African–Americans. Compared with traditional 5 mg/day dosing, implementing the US FDA recommendations or the International Warfarin Pharmacogenomics Consortium (IWPC) algorithm reduced error in weekly dose in European–Americans (from 13.5 to 12.4 and 9.5 mg/week, respectively) but less so in African–Americans (from 15.2 to 15.0 and 13.8 mg/week, respectively). By further incorporating associated variants specific to European–Americans and African–Americans in an expanded algorithm, dose-prediction error was reduced to 9.1 mg/week (95% CI: 8.4–9.6) in European–Americans and 12.4 mg/week (95% CI: 10.0–13.2) in African–Americans. The expanded algorithm explained 41 and 53% of dose variation in African–Americans and European–Americans, respectively, compared with 29 and 50%, respectively, for the IWPC algorithm. Implementing these predictions via dispensable pill regimens similarly reduced dosing error. Conclusion These results validate EHR-linked DNA biorepositories as real-world resources for pharmacogenomic validation and discovery. PMID:22329724

  19. SU-E-T-626: Accuracy of Dose Calculation Algorithms in MultiPlan Treatment Planning System in Presence of Heterogeneities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moignier, C; Huet, C; Barraux, V

    Purpose: Advanced stereotactic radiotherapy (SRT) treatments require accurate dose calculation for treatment planning, especially for treatment sites involving heterogeneous patient anatomy. The purpose of this study was to evaluate the accuracy of the dose calculation algorithms, Raytracing and Monte Carlo (MC), implemented in the MultiPlan treatment planning system (TPS) in the presence of heterogeneities. Methods: First, the LINAC of a CyberKnife radiotherapy facility was modeled with the PENELOPE MC code. A protocol for the measurement of dose distributions with EBT3 films was established and validated through comparison of experimental dose distributions with those calculated by the MultiPlan Raytracing and MC algorithms, as well as with the PENELOPE MC model, for treatments planned in the homogeneous Easycube phantom. Finally, bone and lung inserts were used to set up a heterogeneous Easycube phantom. Treatment plans with the 10, 7.5 or 5 mm field sizes were generated in the MultiPlan TPS for different tumor localizations (in the lung and at the lung/bone/soft tissue interface). Experimental dose distributions were compared to the PENELOPE MC and MultiPlan calculations using the gamma index method. Results: For the experiment in the homogeneous phantom, 100% of the points passed the 3%/3 mm tolerance criteria. These criteria include the global error of the method (CT scan resolution, EBT3 dosimetry, LINAC positioning …), and were used afterwards to estimate the accuracy of the MultiPlan algorithms in heterogeneous media. Comparison of the dose distributions obtained in the heterogeneous phantom is in progress. Conclusion: This work has led to the development of numerical and experimental dosimetric tools for small-beam dosimetry. The Raytracing and MC algorithms implemented in the MultiPlan TPS were evaluated in heterogeneous media.
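
    The gamma index method used for these comparisons can be sketched in a few lines. Below is a minimal 1D global gamma computation with 3%/3 mm criteria on toy profiles; a clinical implementation would work on 2D or 3D dose grids and interpolate the reference distribution.

    ```python
    import numpy as np

    def gamma_pass_rate(dose_eval, dose_ref, x, dose_tol=0.03, dta_mm=3.0):
        """1D global gamma: % of evaluated points with gamma <= 1."""
        dd = dose_tol * dose_ref.max()           # global dose criterion
        gammas = []
        for xi, de in zip(x, dose_eval):
            g2 = ((de - dose_ref) / dd) ** 2 + ((xi - x) / dta_mm) ** 2
            gammas.append(np.sqrt(g2.min()))     # best match over reference
        return 100.0 * np.mean(np.asarray(gammas) <= 1.0)

    x = np.linspace(0, 50, 501)                   # positions in mm
    ref = np.exp(-((x - 25) / 8.0) ** 2)          # toy reference profile
    ev = 1.02 * np.exp(-((x - 25.5) / 8.0) ** 2)  # shifted, rescaled copy
    print(gamma_pass_rate(ev, ref, x))
    ```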

  20. SU-F-SPS-11: The Dosimetric Comparison of Truebeam 2.0 and Cyberknife M6 Treatment Plans for Brain SRS Treatment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mabhouti, H; Sanli, E; Cebe, M

    Purpose: Brain stereotactic radiosurgery involves the use of precisely directed, single-session radiation to create a desired radiobiologic response within the brain target with acceptable minimal effects on surrounding structures or tissues. In this study, a dosimetric comparison of Truebeam 2.0 and Cyberknife M6 treatment plans was made. Methods: For the Truebeam 2.0 machine, treatment planning was done using a 2-full-arc VMAT technique with a 6 FFF beam on the CT scan of a Rando phantom, simulating stereotactic treatment of one brain metastasis. The dose distribution was calculated using the Eclipse treatment planning system with the Acuros XB algorithm. Treatment planning for the same target was also done for the Cyberknife M6 machine with the Multiplan treatment planning system using the Monte Carlo algorithm. Using the same film batch, the net OD-to-dose calibration curve was obtained on both machines by delivering 0–800 cGy. Films were scanned 48 hours after irradiation using an Epson 1000XL flatbed scanner. Dose distributions were measured using EBT3 film dosimeters. The measured and calculated doses were compared. Results: The dose distributions in the target and 2 cm beyond the target edge were calculated on the TPSs and measured using EBT3 film. For the Cyberknife plans, the gamma analysis passing rates between measured and calculated dose distributions were 99.2% and 96.7% for the target and the peripheral region of the target, respectively. For the Truebeam plans, the gamma analysis passing rates were 99.1% and 95.5%, respectively. Conclusion: Although the target dose distribution was calculated accurately by both the Acuros XB and Monte Carlo algorithms, the Monte Carlo algorithm predicts the dose distribution around the peripheral region of the target more accurately than the Acuros algorithm.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kieselmann, J; Bartzsch, S; Oelfke, U

    Purpose: Microbeam Radiation Therapy is a preclinical method in radiation oncology that modulates radiation fields on a micrometre scale. Dose calculation is challenging due to the arising dose gradients and therapeutically important dose ranges. Monte Carlo (MC) simulations, often used as the gold standard, are computationally expensive and hence too slow for the optimisation of treatment parameters in future clinical applications. On the other hand, conventional kernel-based dose calculation leads to inaccurate results close to material interfaces. The purpose of this work is to overcome these inaccuracies while keeping computation times low. Methods: A point kernel superposition algorithm is modified to account for tissue inhomogeneities. Instead of conventional ray tracing approaches, methods from differential geometry are applied and the space around the primary photon interaction is locally warped. The performance of this approach is compared to MC simulations and a simple convolution algorithm (CA) for two different phantoms and photon spectra. Results: While the peak doses of all dose calculation methods agreed within less than 4% deviation, the proposed approach surpassed the simple convolution algorithm in scatter dose accuracy by a factor of up to 3. In a treatment geometry similar to possible future clinical situations, differences between Monte Carlo and the differential geometry algorithm were less than 3%. At the same time the calculation time did not exceed 15 minutes. Conclusion: With the developed method it was possible to improve dose calculation accuracy relative to the CA method, especially at sharp tissue boundaries. While the calculation is more extensive than for the CA method and depends on field size, the typical calculation time for a 20×20 mm² field on a 3.4 GHz processor with 8 GB of RAM remained below 15 minutes. Parallelisation and optimisation of the algorithm could lead to further significant reductions in calculation time.
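
    The simple convolution algorithm (CA) that serves as the baseline here can be sketched as a 2D kernel superposition: scatter dose as the convolution of the primary interaction density with a point kernel. The kernel shape and the field below are toy placeholders, not the authors' data.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    # Minimal sketch of a convolution (CA-style) dose calculation:
    # dose = primary interaction density convolved with a point kernel.
    def ca_dose(terma, kernel):
        return fftconvolve(terma, kernel, mode="same")

    n = 256
    y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2].astype(float)
    kernel = np.exp(-np.hypot(x, y) / 5.0)        # toy isotropic point kernel
    kernel /= kernel.sum()                        # normalise deposited energy
    terma = np.zeros((n, n))
    terma[:, n // 2 - 2:n // 2 + 2] = 1.0         # narrow microbeam-like field
    dose = ca_dose(terma, kernel)
    ```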

  2. Spreading to localized targets in complex networks

    NASA Astrophysics Data System (ADS)

    Sun, Ye; Ma, Long; Zeng, An; Wang, Wen-Xu

    2016-12-01

    As an important type of dynamics on complex networks, spreading is widely used to model many real processes such as the epidemic contagion and information propagation. One of the most significant research questions in spreading is to rank the spreading ability of nodes in the network. To this end, substantial effort has been made and a variety of effective methods have been proposed. These methods usually define the spreading ability of a node as the number of finally infected nodes given that the spreading is initialized from the node. However, in many real cases such as advertising and news propagation, the spreading only aims to cover a specific group of nodes. Therefore, it is necessary to study the spreading ability of nodes towards localized targets in complex networks. In this paper, we propose a reversed local path algorithm for this problem. Simulation results show that our method outperforms the existing methods in identifying the influential nodes with respect to these localized targets. Moreover, the influential spreaders identified by our method can effectively avoid infecting the non-target nodes in the spreading process.
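
    The scoring problem can be illustrated by brute force: repeatedly simulate an independent-cascade spread from a candidate seed and count how much of the target set it reaches. The paper's reversed local path algorithm is an analytical alternative to this kind of estimate; the sketch below, using networkx with an arbitrary Erdős–Rényi graph and target set, is only for illustration.

    ```python
    import random
    import networkx as nx

    def target_coverage(G, seed, targets, beta=0.2, runs=200):
        """Mean fraction of the target set reached by cascades from seed."""
        covered = 0
        for _ in range(runs):
            infected, frontier = {seed}, [seed]
            while frontier:
                nxt = []
                for u in frontier:
                    for v in G[u]:
                        if v not in infected and random.random() < beta:
                            infected.add(v)
                            nxt.append(v)
                frontier = nxt
            covered += len(infected & targets)
        return covered / (runs * len(targets))

    G = nx.erdos_renyi_graph(300, 0.02, seed=1)
    targets = set(range(50))                       # the localized target group
    scores = {u: target_coverage(G, u, targets) for u in G}
    print(max(scores, key=scores.get))             # best target-specific spreader
    ```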

  3. High-dose-rate prostate brachytherapy inverse planning on dose-volume criteria by simulated annealing.

    PubMed

    Deist, T M; Gorissen, B L

    2016-02-07

    High-dose-rate brachytherapy is a tumor treatment method where a highly radioactive source is brought in close proximity to the tumor. In this paper we develop a simulated annealing algorithm to optimize the dwell times at preselected dwell positions to maximize tumor coverage under dose-volume constraints on the organs at risk. Compared to existing algorithms, our algorithm has advantages in terms of speed and objective value and does not require an expensive general purpose solver. Its success mainly depends on exploiting the efficiency of matrix multiplication and a careful selection of the neighboring states. In this paper we outline its details and make an in-depth comparison with existing methods using real patient data.
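
    A minimal sketch of such a dwell-time optimization loop is given below, assuming a precomputed dose-rate matrix A (voxels × dwell positions) so that dose = A·t. The cost function and cooling schedule are simplified placeholders; the published algorithm additionally enforces dose-volume constraints on the organs at risk and exploits efficient matrix multiplication over neighboring states.

    ```python
    import numpy as np

    def anneal(A, t0, target, iters=20000, T0=1.0):
        """Simulated annealing over non-negative dwell times t."""
        def cost(t):
            dose = A @ t
            return np.sum(np.maximum(target - dose, 0.0) ** 2)  # underdose penalty

        t, best = t0.copy(), t0.copy()
        c_cur = c_best = cost(t)
        for k in range(iters):
            T = T0 * (1.0 - k / iters)            # linear cooling schedule
            cand = t.copy()
            j = np.random.randint(len(t))         # perturb one dwell time
            cand[j] = max(0.0, cand[j] + np.random.normal(0, 0.5))
            c_new = cost(cand)
            if c_new < c_cur or np.random.rand() < np.exp(-(c_new - c_cur) / max(T, 1e-9)):
                t, c_cur = cand, c_new            # accept (possibly uphill) move
                if c_new < c_best:
                    best, c_best = cand.copy(), c_new
        return best

    A = np.random.rand(500, 40)                   # toy dose-rate matrix
    t_opt = anneal(A, np.zeros(40), target=np.ones(500))
    ```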

  4. An algorithm for intelligent sorting of CT-related dose parameters.

    PubMed

    Cook, Tessa S; Zimmerman, Stefan L; Steingall, Scott R; Boonn, William W; Kim, Woojin

    2012-02-01

    Imaging centers nationwide are seeking innovative means to record and monitor computed tomography (CT)-related radiation dose in light of multiple instances of patient overexposure to medical radiation. As a solution, we have developed RADIANCE, an automated pipeline for extraction, archival, and reporting of CT-related dose parameters. Estimation of whole-body effective dose from CT dose length product (DLP)--an indirect estimate of radiation dose--requires anatomy-specific conversion factors that cannot be applied to total DLP, but instead necessitate individual anatomy-based DLPs. A challenge exists because the total DLP reported on a dose sheet often includes multiple separate examinations (e.g., chest CT followed by abdominopelvic CT). Furthermore, the individual reported series DLPs may not be clearly or consistently labeled. For example, "arterial" could refer to the arterial phase of the triple liver CT or the arterial phase of a CT angiogram. To address this problem, we have designed an intelligent algorithm to parse dose sheets for multi-series CT examinations and correctly separate the total DLP into its anatomic components. The algorithm uses information from the departmental PACS to determine how many distinct CT examinations were concurrently performed. Then, it matches the number of distinct accession numbers to the series that were acquired and anatomically matches individual series DLPs to their appropriate CT examinations. This algorithm allows for more accurate dose analytics, but there remain instances where automatic sorting is not feasible. To ultimately improve radiology patient care, we must standardize series names and exam names to unequivocally sort exams by anatomy and correctly estimate whole-body effective dose.
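
    Once series DLPs are correctly attributed to anatomic regions, the effective dose estimate itself is a one-line calculation per examination: E ≈ k · DLP with an anatomy-specific conversion factor k. The k values in the sketch below are commonly cited adult estimates (in mSv per mGy·cm) but should be treated as placeholders here.

    ```python
    # Sketch of the anatomy-specific conversion described above.
    # k-factor values are commonly cited adult estimates, used as placeholders.
    K_FACTORS = {"head": 0.0021, "neck": 0.0059, "chest": 0.014,
                 "abdomen_pelvis": 0.015}

    def effective_dose_msv(series_dlps):
        """series_dlps: list of (anatomy, DLP in mGy*cm) after sorting."""
        return sum(K_FACTORS[anatomy] * dlp for anatomy, dlp in series_dlps)

    print(effective_dose_msv([("chest", 400.0), ("abdomen_pelvis", 600.0)]))
    ```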

  5. An algorithm for intelligent sorting of CT-related dose parameters

    NASA Astrophysics Data System (ADS)

    Cook, Tessa S.; Zimmerman, Stefan L.; Steingal, Scott; Boonn, William W.; Kim, Woojin

    2011-03-01

    Imaging centers nationwide are seeking innovative means to record and monitor CT-related radiation dose in light of multiple instances of patient over-exposure to medical radiation. As a solution, we have developed RADIANCE, an automated pipeline for extraction, archival and reporting of CT-related dose parameters. Estimation of whole-body effective dose from CT dose-length product (DLP), an indirect estimate of radiation dose, requires anatomy-specific conversion factors that cannot be applied to total DLP, but instead necessitate individual anatomy-based DLPs. A challenge exists because the total DLP reported on a dose sheet often includes multiple separate examinations (e.g., chest CT followed by abdominopelvic CT). Furthermore, the individual reported series DLPs may not be clearly or consistently labeled. For example, "Arterial" could refer to the arterial phase of the triple liver CT or the arterial phase of a CT angiogram. To address this problem, we have designed an intelligent algorithm to parse dose sheets for multi-series CT examinations and correctly separate the total DLP into its anatomic components. The algorithm uses information from the departmental PACS to determine how many distinct CT examinations were concurrently performed. Then, it matches the number of distinct accession numbers to the series that were acquired, and anatomically matches individual series DLPs to their appropriate CT examinations. This algorithm allows for more accurate dose analytics, but there remain instances where automatic sorting is not feasible. To ultimately improve radiology patient care, we must standardize series names and exam names to unequivocally sort exams by anatomy and correctly estimate whole-body effective dose.

  6. Spreading of Neutrophils: From Activation to Migration

    PubMed Central

    Sengupta, Kheya; Aranda-Espinoza, Helim; Smith, Lee; Janmey, Paul; Hammer, Daniel

    2006-01-01

    Neutrophils rely on rapid changes in morphology to ward off invaders. The time-resolved dynamics of spreading human neutrophils after activation by the chemoattractant fMLF (formyl methionyl leucyl phenylalanine) were observed by RICM (reflection interference contrast microscopy). An image-processing algorithm was developed to identify the changes in the overall cell shape and the zones of close contact with the substrate. We show that in the case of neutrophils, cell spreading immediately after exposure to fMLF is anisotropic and directional. The dependence of the spreading area, A, of the cell as a function of time, t, shows several distinct regimes, each of which can be fitted as a power law (A ∼ t^b). The different spreading regimes correspond to distinct values of the exponent b and are related to the adhesion state of the cell. Treatment with cytochalasin-B eliminated the anisotropy in the spreading. PMID:17012330
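
    The exponent b in A ∼ t^b for each regime can be recovered with a straight-line fit in log-log space, as in the sketch below on synthetic spreading data.

    ```python
    import numpy as np

    # Sketch of extracting a spreading exponent b from A ~ t**b by a linear
    # fit in log-log space; A(t) here is synthetic data with a known b = 0.8.
    t = np.linspace(1.0, 60.0, 120)                  # time in s
    A = 3.0 * t ** 0.8 * np.exp(np.random.normal(0, 0.05, t.size))
    b, log_prefactor = np.polyfit(np.log(t), np.log(A), 1)
    print(f"fitted exponent b = {b:.2f}")
    ```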

  7. Dose reduction potential of iterative reconstruction algorithms in neck CTA-a simulation study.

    PubMed

    Ellmann, Stephan; Kammerer, Ferdinand; Allmendinger, Thomas; Brand, Michael; Janka, Rolf; Hammon, Matthias; Lell, Michael M; Uder, Michael; Kramer, Manuel

    2016-10-01

    This study aimed to determine the degree of radiation dose reduction in neck CT angiography (CTA) achievable with Sinogram-affirmed iterative reconstruction (SAFIRE) algorithms. 10 consecutive patients scheduled for neck CTA were included in this study. CTA images of the external carotid arteries were either reconstructed with filtered back projection (FBP) at the full radiation dose level or underwent simulated dose reduction by proprietary reconstruction software. The dose-reduced images were reconstructed using either SAFIRE 3 or SAFIRE 5 and compared with full-dose FBP images in terms of vessel definition. 5 observers performed a total of 3000 pairwise comparisons. SAFIRE allowed substantial radiation dose reductions in neck CTA while maintaining vessel definition. The possible levels of radiation dose reduction ranged from approximately 34% to approximately 90% and depended on the SAFIRE algorithm strength and the size of the vessel of interest. In general, larger vessels permitted higher degrees of radiation dose reduction, especially with higher SAFIRE strength levels. With small vessels, the superiority of SAFIRE 5 over SAFIRE 3 was lost. Neck CTA can be performed with substantially less radiation dose when SAFIRE is applied. The exact degree of radiation dose reduction should be adapted to the clinical question, in particular to the smallest vessel needing excellent definition.

  8. SU-E-T-373: Evaluation and Reduction of Contralateral Skin /subcutaneous Dose for Tangential Breast Irradiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Butson, M; Carroll, S; Whitaker, M

    2015-06-15

    Purpose: Tangential breast irradiation is a standard treatment technique for breast cancer therapy. One aspect of dose delivery is the dose delivered to the skin by electron contamination. This effect is especially important for the highly oblique beams used on the medial tangent, where the electron contamination deposits dose on the contralateral breast side. This work aims to investigate and predict, as well as define a method to reduce, this dose during tangential breast radiotherapy. Methods: Analysis and calculation of breast skin and subcutaneous dose were performed using a Varian Eclipse planning system with the AAA algorithm for 6 MV x-ray treatments. Measurements were made using EBT3 Gafchromic film to verify the accuracy of the planning data. Various materials were tested to assess their ability to remove electron contamination on the contralateral breast. Results: The Varian Eclipse AAA algorithm could accurately estimate contralateral breast dose in the build-up region at depths of 2 mm or deeper. Surface dose was underestimated by the AAA algorithm. Doses up to 12% of the applied dose were seen on the contralateral breast surface and up to 9% at 2 mm depth. Because this radiation is mainly low-energy electron contamination, a bolus material can be used to reduce this dose to less than 3%; this is accomplished by 10 mm of Superflab bolus or by 1 mm of lead. Conclusion: Contralateral breast skin and subcutaneous dose is present in tangential breast treatment and has been measured at up to 12% of the applied dose from the medial tangent beam. This dose is deposited at shallow depths and is accurately calculated by the Eclipse AAA algorithm at depths of 2 mm or greater. Bolus material placed over the contralateral breast can be used to effectively reduce this skin dose.

  9. Performance comparison between total variation (TV)-based compressed sensing and statistical iterative reconstruction algorithms.

    PubMed

    Tang, Jie; Nett, Brian E; Chen, Guang-Hong

    2009-10-07

    Of all available reconstruction methods, statistical iterative reconstruction algorithms appear particularly promising since they enable accurate physical noise modeling. The newly developed compressive sampling/compressed sensing (CS) algorithm has shown the potential to accurately reconstruct images from highly undersampled data. The CS algorithm can be implemented in the statistical reconstruction framework as well. In this study, we compared the performance of two standard statistical reconstruction algorithms (penalized weighted least squares and q-GGMRF) to the CS algorithm. In assessing the image quality using these iterative reconstructions, it is critical to utilize realistic background anatomy, as the reconstruction results are object dependent. A cadaver head was scanned on a Varian Trilogy system at different dose levels. Several figures of merit, including the relative root mean square error and a quality factor which accounts for the noise performance and the spatial resolution, were introduced to objectively evaluate reconstruction performance. A comparison between the three algorithms is presented for a constant undersampling factor at several dose levels. To facilitate this comparison, the original CS method was formulated in the framework of the statistical image reconstruction algorithms. Important conclusions from our studies are that (1) for realistic neuro-anatomy, over 100 projections are required to avoid streak artifacts in the reconstructed images even with CS reconstruction, (2) regardless of the algorithm employed, it is beneficial to distribute the total dose over more views as long as each view remains quantum noise limited and (3) the total variation-based CS method is not appropriate for very low dose levels because, while it can mitigate streaking artifacts, the images exhibit patchy behavior, which is potentially harmful for medical diagnosis.
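
    The total-variation penalty at the core of the TV-based CS method can be sketched as one (smoothed) TV gradient step on an image, of the kind that would be interleaved with data-consistency updates in an iterative reconstruction. The forward-difference discretization below is a simple illustration, not the authors' implementation.

    ```python
    import numpy as np

    def tv_gradient(img, eps=1e-6):
        """Gradient of smoothed total variation: -div(grad(img)/|grad(img)|)."""
        gx = np.diff(img, axis=1, append=img[:, -1:])   # forward differences
        gy = np.diff(img, axis=0, append=img[-1:, :])
        mag = np.sqrt(gx ** 2 + gy ** 2 + eps)          # smoothed magnitude
        div_x = np.diff(gx / mag, axis=1, prepend=(gx / mag)[:, :1])
        div_y = np.diff(gy / mag, axis=0, prepend=(gy / mag)[:1, :])
        return -(div_x + div_y)

    img = np.random.rand(64, 64)
    img_next = img - 0.1 * tv_gradient(img)             # one TV smoothing step
    ```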

  10. Percentage depth dose calculation accuracy of model based algorithms in high energy photon small fields through heterogeneous media and comparison with plastic scintillator dosimetry.

    PubMed

    Alagar, Ananda Giri Babu; Mani, Ganesh Kadirampatti; Karunakaran, Kaviarasu

    2016-01-08

    Small fields (smaller than 4 × 4 cm²) are used in stereotactic and conformal treatments, where heterogeneity is normally present. Since dose calculation in small fields and in heterogeneous media is prone to larger discrepancies, the algorithms used by treatment planning systems (TPS) should be evaluated to achieve better treatment results. This report evaluates the accuracy of four model-based algorithms against measurement: X-ray Voxel Monte Carlo (XVMC) from Monaco, Superposition (SP) from CMS XiO, and Acuros XB (AXB) and the analytical anisotropic algorithm (AAA) from Eclipse. Measurements were made using an Exradin W1 plastic scintillator in a Solid Water phantom with heterogeneities such as air, lung, bone, and aluminum, irradiated with 6 and 15 MV photons of square field sizes ranging from 1 × 1 to 4 × 4 cm². Each heterogeneity was introduced individually at two different depths from the depth of dose maximum (Dmax), one setup nearer to and another farther from Dmax. The central axis percentage depth-dose (CADD) curve for each setup was measured separately and compared with the TPS algorithm calculation for the same setup. The percentage normalized root-mean-squared deviation (%NRMSD), which represents the whole CADD curve's deviation from the measurement, was calculated. For air and lung heterogeneity, at both 6 and 15 MV, all algorithms show maximum deviation for the 1 × 1 cm² field size, gradually decreasing as field size increases, except for AAA. For aluminum and bone, all algorithms' deviations are smaller at 15 MV irrespective of setup. In all heterogeneity setups, the 1 × 1 cm² field showed the maximum deviation, except in the 6 MV bone setup. For all algorithms in the study, irrespective of energy and field size, the dose deviation is higher when a heterogeneity lies nearer to Dmax than when the same heterogeneity lies farther from it. All algorithms also show maximum deviation in lower-density materials compared with high-density materials.
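
    The figure of merit can be sketched directly. Below, %NRMSD is computed as the root-mean-square deviation between calculated and measured depth-dose curves, normalized to the measured maximum; the paper's exact normalization convention is an assumption here, and the curves are toys.

    ```python
    import numpy as np

    def percent_nrmsd(calc, meas):
        """RMS deviation of a calculated curve, as % of the measured maximum."""
        return 100.0 * np.sqrt(np.mean((calc - meas) ** 2)) / meas.max()

    depth = np.linspace(0, 20, 201)                   # depth in cm
    meas = np.exp(-0.05 * depth)                      # toy measured PDD
    calc = np.exp(-0.052 * depth)                     # toy calculated PDD
    print(percent_nrmsd(calc, meas))
    ```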

  11. Can image enhancement allow radiation dose to be reduced whilst maintaining the perceived diagnostic image quality required for coronary angiography?

    PubMed Central

    Joshi, Anuja; Gislason-Lee, Amber J; Keeble, Claire; Sivananthan, Uduvil M

    2017-01-01

    Objective: The aim of this research was to quantify the reduction in radiation dose facilitated by image processing alone for percutaneous coronary intervention (PCI) patient angiograms, without reducing the perceived image quality required to confidently make a diagnosis. Methods: Incremental amounts of image noise were added to five PCI angiograms, simulating the angiogram as having been acquired at corresponding lower dose levels (10–89% dose reduction). 16 observers with relevant experience scored the image quality of these angiograms in 3 states—with no image processing and with 2 different modern image processing algorithms applied. These algorithms are used on state-of-the-art and previous generation cardiac interventional X-ray systems. Ordinal regression allowing for random effects and the delta method were used to quantify the dose reduction possible by the processing algorithms, for equivalent image quality scores. Results: Observers rated the quality of the images processed with the state-of-the-art and previous generation image processing with a 24.9% and 15.6% dose reduction, respectively, as equivalent in quality to the unenhanced images. The dose reduction facilitated by the state-of-the-art image processing relative to previous generation processing was 10.3%. Conclusion: Results demonstrate that statistically significant dose reduction can be facilitated with no loss in perceived image quality using modern image enhancement; the most recent processing algorithm was more effective in preserving image quality at lower doses. Advances in knowledge: Image enhancement was shown to maintain perceived image quality in coronary angiography at a reduced level of radiation dose using computer software to produce synthetic images from real angiograms simulating a reduction in dose. PMID:28124572

  12. A Survey of Singular Value Decomposition Methods and Performance Comparison of Some Available Serial Codes

    NASA Technical Reports Server (NTRS)

    Plassman, Gerald E.

    2005-01-01

    This contractor report describes a performance comparison of available alternative complete Singular Value Decomposition (SVD) methods and implementations which are suitable for incorporation into point spread function deconvolution algorithms. The report also presents a survey of alternative algorithms, including partial SVDs, special-case SVDs, and others developed for concurrent processing systems.
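
    As a minimal illustration of where the SVD enters PSF deconvolution, the sketch below solves a toy 1D blur model H·x = y by truncated SVD, discarding the small singular values that would otherwise amplify noise.

    ```python
    import numpy as np

    def tsvd_solve(H, y, rcond=1e-3):
        """Truncated-SVD solution of H @ x = y, dropping small singular values."""
        U, s, Vt = np.linalg.svd(H, full_matrices=False)
        keep = s > rcond * s[0]                       # noise-robust truncation
        return Vt[keep].T @ ((U[:, keep].T @ y) / s[keep])

    n = 100
    H = np.exp(-0.5 * ((np.arange(n)[:, None] - np.arange(n)) / 2.0) ** 2)
    x_true = np.zeros(n); x_true[40:60] = 1.0         # toy object
    y = H @ x_true + np.random.normal(0, 0.01, n)     # blurred, noisy data
    x_rec = tsvd_solve(H, y)
    ```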

  13. Monte Carlo uncertainty analysis of dose estimates in radiochromic film dosimetry with single-channel and multichannel algorithms.

    PubMed

    Vera-Sánchez, Juan Antonio; Ruiz-Morales, Carmen; González-López, Antonio

    2018-03-01

    To provide a multi-stage model for calculating uncertainty in radiochromic film dosimetry with Monte Carlo techniques, applied here to single-channel and multichannel algorithms. Two lots of Gafchromic EBT3 were exposed in two different Varian linacs and read with an EPSON V800 flatbed scanner. The Monte Carlo techniques in uncertainty analysis provide a numerical representation of the probability density functions of the output magnitudes. From this numerical representation, traditional parameters of uncertainty analysis such as the standard deviation and bias are calculated. Moreover, these numerical representations are used to investigate the shape of the probability density functions of the output magnitudes. In addition, another calibration film was read in four EPSON scanners (two V800 and two 10000XL) and the uncertainty analysis was carried out with the four images. The dose estimates of single-channel and multichannel algorithms show Gaussian behavior and low bias. The multichannel algorithms lead to less uncertainty in the final dose estimates when the EPSON V800 is employed as the reading device. In the case of the EPSON 10000XL, the single-channel algorithms provide less uncertainty in the dose estimates for doses higher than 4 Gy. The application of the model together with Monte Carlo techniques leads to a complete characterization of the uncertainties in radiochromic film dosimetry.
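
    The Monte Carlo uncertainty technique itself is straightforward to sketch: sample the inputs from assumed probability density functions, push each sample through the dose model, and inspect the resulting output distribution. The netOD-to-dose calibration form below is a commonly used one, and all parameter values are illustrative placeholders.

    ```python
    import numpy as np

    # Sample inputs (net optical density and calibration parameters) from
    # assumed Gaussians, then propagate through a netOD-to-dose model.
    rng = np.random.default_rng(0)
    N = 100_000
    net_od = rng.normal(0.40, 0.005, N)               # measured netOD +/- noise
    a = rng.normal(10.0, 0.1, N)                      # calibration fit params
    b = rng.normal(35.0, 0.5, N)                      # (placeholder values)
    dose = a * net_od + b * net_od ** 2.5             # common netOD-to-dose form
    print(dose.mean(), dose.std())                    # dose estimate, uncertainty
    ```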

  14. Development of a pharmacogenetic-guided warfarin dosing algorithm for Puerto Rican patients.

    PubMed

    Ramos, Alga S; Seip, Richard L; Rivera-Miranda, Giselle; Felici-Giovanini, Marcos E; Garcia-Berdecia, Rafael; Alejandro-Cowan, Yirelia; Kocherla, Mohan; Cruz, Iadelisse; Feliu, Juan F; Cadilla, Carmen L; Renta, Jessica Y; Gorowski, Krystyna; Vergara, Cunegundo; Ruaño, Gualberto; Duconge, Jorge

    2012-12-01

    This study was aimed at developing a pharmacogenetic-driven warfarin-dosing algorithm in 163 admixed Puerto Rican patients on stable warfarin therapy. A multiple linear-regression analysis was performed using log-transformed effective warfarin dose as the dependent variable, and combining CYP2C9 and VKORC1 genotyping with other relevant nongenetic clinical and demographic factors as independent predictors. The model explained more than two-thirds of the observed variance in the warfarin dose among Puerto Ricans, and also produced significantly better 'ideal dose' estimates than two pharmacogenetic models and clinical algorithms published previously, with the greatest benefit seen in patients ultimately requiring <7 mg/day. We also assessed the clinical validity of the model using an independent validation cohort of 55 Puerto Rican patients from Hartford, CT, USA (R(2) = 51%). Our findings provide the basis for planning prospective pharmacogenetic studies to demonstrate the clinical utility of genotyping warfarin-treated Puerto Rican patients.

  15. Testing of the analytical anisotropic algorithm for photon dose calculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Esch, Ann van; Tillikainen, Laura; Pyykkonen, Jukka

    2006-11-15

    The analytical anisotropic algorithm (AAA) was implemented in the Eclipse (Varian Medical Systems) treatment planning system to replace the single pencil beam (SPB) algorithm for the calculation of dose distributions for photon beams. AAA was developed to improve the dose calculation accuracy, especially in heterogeneous media. The total dose deposition is calculated as the superposition of the dose deposited by two photon sources (primary and secondary) and by an electron contamination source. The photon dose is calculated as a three-dimensional convolution of Monte Carlo precalculated scatter kernels, scaled according to the electron density matrix. For the configuration of AAA, an optimization algorithm determines the parameters characterizing the multiple source model by optimizing the agreement between the calculated and measured depth dose curves and profiles for the basic beam data. We have combined the acceptance tests obtained in three different departments for 6, 15, and 18 MV photon beams. The accuracy of AAA was tested for different field sizes (symmetric and asymmetric) for open fields, wedged fields, and static and dynamic multileaf collimation fields. Depth dose behavior at different source-to-phantom distances was investigated. Measurements were performed on homogeneous, water-equivalent phantoms, on simple phantoms containing cork inhomogeneities, and on the thorax of an anthropomorphic phantom. Comparisons were made among measurements, AAA, and SPB calculations. The optimization procedure for the configuration of the algorithm was successful in reproducing the basic beam data with an overall accuracy of 3%, 1 mm in the build-up region, and 1%, 1 mm elsewhere. Testing of the algorithm in more clinical setups showed comparable results for depth dose curves, profiles, and monitor units of symmetric open and wedged beams below dmax. The electron contamination model was found to be suboptimal for modeling the dose around dmax, especially for physical wedges at smaller source-to-phantom distances. For the asymmetric field verification, absolute dose differences of up to 4% were observed for the most extreme asymmetries. Compared to the SPB, the penumbra modeling is considerably improved (1%, 1 mm). At the interface between solid water and cork, profiles show better agreement with AAA. Depth dose curves in the cork are substantially better with AAA than with SPB. Improvements are more pronounced for 18 MV than for 6 MV. Point dose measurements in the thoracic phantom are mostly within 5%. In general, we can conclude that, compared to SPB, AAA improves the accuracy of dose calculations. Particular progress was made with respect to the penumbra and low-dose regions. In heterogeneous materials, improvements are substantial and more pronounced for high (18 MV) than for low (6 MV) energies.

  16. Influence of radiation dose and reconstruction algorithm in MDCT assessment of airway wall thickness: A phantom study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gomez-Cardona, Daniel; Nagle, Scott K.

    Purpose: Wall thickness (WT) is an airway feature of great interest for the assessment of morphological changes in the lung parenchyma. Multidetector computed tomography (MDCT) has recently been used to evaluate airway WT, but the potential risk of radiation-induced carcinogenesis—particularly in younger patients—might limit a wider use of this imaging method in clinical practice. The recent commercial implementation of the statistical model-based iterative reconstruction (MBIR) algorithm, instead of the conventional filtered back projection (FBP) algorithm, has enabled considerable radiation dose reduction in many other clinical applications of MDCT. The purpose of this work was to study the impact of radiation dose and MBIR in the MDCT assessment of airway WT. Methods: An airway phantom was scanned using a clinical MDCT system (Discovery CT750 HD, GE Healthcare) at 4 kV levels and 5 mAs levels. Both FBP and a commercial implementation of MBIR (Veo™, GE Healthcare) were used to reconstruct CT images of the airways. For each kV–mAs combination and each reconstruction algorithm, the contrast-to-noise ratio (CNR) of the airways was measured, and the WT of each airway was measured and compared with the nominal value; the relative bias and the angular standard deviation in the measured WT were calculated. For each airway and reconstruction algorithm, the overall performance of WT quantification across all of the 20 kV–mAs combinations was quantified by the sum of squares (SSQ) of the differences between the measured and nominal WT values. Finally, the particular kV–mAs combination and reconstruction algorithm that minimized radiation dose while still achieving a reference WT quantification accuracy level was chosen as the optimal acquisition and reconstruction settings. Results: The wall thicknesses of seven airways of different sizes were analyzed in the study. Compared with FBP, MBIR improved the CNR of the airways, particularly at low radiation dose levels. For FBP, the relative bias and the angular standard deviation of the measured WT increased steeply with decreasing radiation dose. Except for the smallest airway, MBIR enabled significant reduction in both the relative bias and angular standard deviation of the WT, particularly at low radiation dose levels; the SSQ was reduced by 50%–96% by using MBIR. The optimal reconstruction algorithm was found to be MBIR for the seven airways being assessed, and the combined use of MBIR and optimal kV–mAs selection resulted in a radiation dose reduction of 37%–83% compared with a reference scan protocol with a dose level of 1 mGy. Conclusions: The quantification accuracy of airway WT is strongly influenced by radiation dose and reconstruction algorithm. The MBIR algorithm potentially allows the desired WT quantification accuracy to be achieved with reduced radiation dose, which may enable a wider clinical use of MDCT for the assessment of airway WT, particularly for younger patients who may be more sensitive to exposures with ionizing radiation.

  17. TH-A-19A-06: Site-Specific Comparison of Analytical and Monte Carlo Based Dose Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schuemann, J; Grassberger, C; Paganetti, H

    2014-06-15

    Purpose: To investigate the impact of complex patient geometries on the capability of analytical dose calculation algorithms to accurately predict dose distributions, and to verify currently used uncertainty margins in proton therapy. Methods: Dose distributions predicted by an analytical pencil-beam algorithm were compared with Monte Carlo simulations (MCS) using TOPAS. 79 complete patient treatment plans were investigated for 7 disease sites (liver, prostate, breast, medulloblastoma spine and whole brain, lung, and head and neck). A total of 508 individual passively scattered treatment fields were analyzed for field-specific properties. Comparisons based on target coverage indices (EUD, D95, D90 and D50) were performed. Range differences were estimated for the distal position of the 90% dose level (R90) and the 50% dose level (R50). Two-dimensional distal dose surfaces were calculated, and the root mean square differences (RMSD), the average range difference (ARD), and the average distal dose degradation (ADD), defined as the distance between the distal positions of the 80% and 20% dose levels (R80–R20), were analyzed. Results: We found target coverage indices calculated by TOPAS to generally be around 1–2% lower than predicted by the analytical algorithm. Differences in R90 predicted by TOPAS and the planning system can be larger than currently applied range margins in proton therapy for small regions distal to the target volume. We estimate new site-specific range margins (R90) for analytical dose calculations considering total range uncertainties and uncertainties from dose calculation alone, based on the RMSD. Our results demonstrate that a reduction of currently used uncertainty margins is feasible for liver, prostate and whole-brain fields even without introducing MC dose calculations. Conclusion: Analytical dose calculation algorithms predict dose distributions within clinical limits for more homogeneous patient sites (liver, prostate, whole brain). However, we recommend treatment plan verification using Monte Carlo simulations for patients with complex geometries.
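
    Range metrics such as R90, R50, and the R80–R20 falloff can be extracted from a depth-dose curve by interpolating on the distal falloff, as in the sketch below on a toy Bragg-like curve.

    ```python
    import numpy as np

    def distal_depth(depth, dose, level):
        """Depth where the normalized dose crosses `level` on the distal side."""
        d = dose / dose.max()
        distal = slice(np.argmax(d), None)            # points beyond the peak
        # np.interp needs increasing x, so reverse the falling distal curve
        return np.interp(level, d[distal][::-1], depth[distal][::-1])

    depth = np.linspace(0, 200, 2001)                 # depth in mm, toy curve
    dose = np.exp(-((depth - 150) / 12.0) ** 2) + 0.3 * (depth < 150)
    r90, r50 = distal_depth(depth, dose, 0.9), distal_depth(depth, dose, 0.5)
    falloff = distal_depth(depth, dose, 0.2) - distal_depth(depth, dose, 0.8)
    ```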

  18. Rapid code acquisition algorithms employing PN matched filters

    NASA Technical Reports Server (NTRS)

    Su, Yu T.

    1988-01-01

    The performance of four algorithms using pseudonoise matched filters (PNMFs), for direct-sequence spread-spectrum systems, is analyzed. They are: parallel search with fixed-dwell detector (PL-FDD), parallel search with sequential detector (PL-SD), parallel-serial search with fixed-dwell detector (PS-FDD), and parallel-serial search with sequential detector (PS-SD). The operating characteristic for each detector and the mean acquisition time for each algorithm are derived. All the algorithms are studied in conjunction with the noncoherent integration technique, which enables the system to operate in the presence of data modulation. Several previous proposals using PNMFs are seen as special cases of the present algorithms.

  19. A correction scheme for a simplified analytical random walk model algorithm of proton dose calculation in distal Bragg peak regions

    NASA Astrophysics Data System (ADS)

    Yao, Weiguang; Merchant, Thomas E.; Farr, Jonathan B.

    2016-10-01

    The lateral homogeneity assumption is used in most analytical algorithms for proton dose, such as the pencil-beam algorithms and our simplified analytical random walk model. To improve the dose calculation in the distal fall-off region in heterogeneous media, we analyzed primary proton fluence near heterogeneous media and propose to calculate the lateral fluence with voxel-specific Gaussian distributions. The lateral fluence from a beamlet is no longer expressed by a single Gaussian for all the lateral voxels, but by a specific Gaussian for each lateral voxel. The voxel-specific Gaussian for the beamlet of interest is calculated by re-initializing the fluence deviation on an effective surface where the proton energies of the beamlet of interest and the beamlet passing the voxel are the same. The dose improvement from the correction scheme was demonstrated by the dose distributions in two sets of heterogeneous phantoms consisting of cortical bone, lung, and water and by evaluating distributions in example patients with a head-and-neck tumor and metal spinal implants. The dose distributions from Monte Carlo simulations were used as the reference. The correction scheme effectively improved the dose calculation accuracy in the distal fall-off region and increased the gamma test pass rate. The extra computation for the correction was about 20% of that for the original algorithm but is dependent upon patient geometry.

  20. Measuring radiation dose in computed tomography using elliptic phantom and free-in-air, and evaluating iterative metal artifact reduction algorithm

    NASA Astrophysics Data System (ADS)

    Morgan, Ashraf

    The need for an accurate and reliable way to measure patient dose in multi-row detector computed tomography (MDCT) has increased significantly. This research focused on the possibility of measuring CT dose in air to estimate the Computed Tomography Dose Index (CTDI) for routine quality control purposes. A new elliptic CTDI phantom that better represents human geometry was manufactured to investigate the effect of subject shape on the measured CTDI. Monte Carlo simulation was utilized to determine the dose distribution in comparison to the traditional cylindrical CTDI phantom. This research also investigated the effect of Siemens Healthcare's newly developed iMAR (iterative metal artifact reduction) algorithm; an arthroplasty phantom was designed and manufactured for that purpose. The design of the new phantoms was part of the research, as they mimic human geometry better than the existing CTDI phantom. The standard CTDI phantom is a right cylinder that does not adequately represent the geometry of the majority of the patient population. Any dose reduction algorithm that is used during a patient scan will not be utilized when scanning the CTDI phantom, so a better-designed phantom will allow the use of dose reduction algorithms when measuring dose, which leads to better dose estimation and/or a better understanding of dose delivery. Doses from the standard CTDI phantom and the newly designed phantoms were compared to doses measured in air. Iterative reconstruction is a promising technique for MDCT dose reduction and artifact correction. Iterative reconstruction algorithms have been developed to address specific imaging tasks, as is the case with iterative metal artifact reduction (iMAR), which was developed by Siemens and is to be used with the company's future computed tomography platform. The goal of iMAR is to reduce metal artifacts when imaging patients with metal implants and to recover the CT numbers of tissues adjacent to the implant. This research evaluated iMAR's capability to recover CT numbers and reduce noise. The use of iMAR should also allow using a lower tube voltage instead of the 140 kVp frequently used to image patients with shoulder implants. The evaluations of image quality and dose reduction were carried out using an arthroplasty phantom.
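
    For reference, the standard CTDI bookkeeping that such phantom measurements feed into is a pair of one-line formulas: the weighted CTDI combines center and peripheral chamber readings, and the volume CTDI corrects for helical pitch. Input values in the sketch are illustrative.

    ```python
    # Standard CTDI relations; the reading values below are illustrative.
    def ctdi_w(ctdi100_center, ctdi100_periphery):
        """Weighted CTDI: 1/3 center + 2/3 periphery (mGy)."""
        return ctdi100_center / 3.0 + 2.0 * ctdi100_periphery / 3.0

    def ctdi_vol(ctdi_w_mgy, pitch):
        """Volume CTDI: weighted CTDI corrected for helical pitch (mGy)."""
        return ctdi_w_mgy / pitch

    cw = ctdi_w(10.0, 12.0)          # mGy
    print(ctdi_vol(cw, pitch=1.2))   # mGy
    ```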

  1. Commissioning and initial acceptance tests for a commercial convolution dose calculation algorithm for radiotherapy treatment planning in comparison with Monte Carlo simulation and measurement

    PubMed Central

    Moradi, Farhad; Mahdavi, Seyed Rabi; Mostaar, Ahmad; Motamedi, Mohsen

    2012-01-01

    In this study, the commissioning of a dose calculation algorithm in a currently used treatment planning system was performed, and the calculation accuracy in tissue heterogeneities of two methods available in the treatment planning system, i.e., collapsed cone convolution (CCC) and equivalent tissue air ratio (ETAR), was verified. For this purpose an inhomogeneous phantom (IMRT thorax phantom) was used, and dose curves obtained by the TPS (treatment planning system) were compared with experimental measurements and Monte Carlo (MCNP code) simulation. Dose measurements were performed using EDR2 radiographic films within the phantom. The dose difference (DD) between the experimental results and the two calculation methods was obtained. Results indicate a maximum difference of 12% in the lung and 3% in the bone tissue of the phantom between the two methods, and the CCC algorithm shows more accurate depth-dose curves in tissue heterogeneities. Simulation results show accurate dose estimation by MCNP4C in the soft tissue region of the phantom and better results than the ETAR method in bone and lung tissues. PMID:22973081

  2. Warfarin pharmacogenetics: a single VKORC1 polymorphism is predictive of dose across 3 racial groups.

    PubMed

    Limdi, Nita A; Wadelius, Mia; Cavallari, Larisa; Eriksson, Niclas; Crawford, Dana C; Lee, Ming-Ta M; Chen, Chien-Hsiun; Motsinger-Reif, Alison; Sagreiya, Hersh; Liu, Nianjun; Wu, Alan H B; Gage, Brian F; Jorgensen, Andrea; Pirmohamed, Munir; Shin, Jae-Gook; Suarez-Kurtz, Guilherme; Kimmel, Stephen E; Johnson, Julie A; Klein, Teri E; Wagner, Michael J

    2010-05-06

    Warfarin-dosing algorithms incorporating CYP2C9 and VKORC1 -1639G>A improve dose prediction compared with algorithms based solely on clinical and demographic factors. However, these algorithms better capture dose variability among whites than Asians or blacks. Herein, we evaluate whether other VKORC1 polymorphisms and haplotypes explain additional variation in warfarin dose beyond that explained by VKORC1 -1639G>A among Asians (n = 1103), blacks (n = 670), and whites (n = 3113). Participants were recruited from 11 countries as part of the International Warfarin Pharmacogenetics Consortium effort. Evaluation of the effects of individual VKORC1 single nucleotide polymorphisms (SNPs) and haplotypes on warfarin dose used both univariate and multivariable linear regression. VKORC1 -1639G>A and 1173C>T individually explained the greatest variance in dose in all 3 racial groups. Incorporation of additional VKORC1 SNPs or haplotypes did not further improve dose prediction. VKORC1 explained greater variability in dose among whites than blacks and Asians. Differences in the percentage of variance in dose explained by VKORC1 across race were largely accounted for by the frequency of the -1639A (or 1173T) allele. Thus, clinicians should recognize that, although at a population level the contribution of VKORC1 toward dose requirements is higher in whites than in nonwhites, genotype predicts similar dose requirements across racial groups.

  3. SU-E-T-344: Validation and Clinical Experience of Eclipse Electron Monte Carlo Algorithm (EMC)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pokharel, S; Rana, S

    2014-06-01

    Purpose: The purpose of this study is to validate the Eclipse Electron Monte Carlo (EMC) algorithm for routine clinical use. Methods: The PTW inhomogeneity phantom (T40037), with different combinations of heterogeneous slabs, was CT-scanned with a Philips Brilliance 16-slice scanner. The phantom contains blocks of Rando Alderson materials mimicking lung, polystyrene (tissue), PTFE (bone) and PMMA. The phantom has a 30×30×2.5 cm base plate with 2 cm recesses to insert inhomogeneities. The detector systems used in this study were diodes, TLDs and Gafchromic EBT2 films. The diodes and TLDs were included in the CT scans. The CT sets were transferred to the Eclipse treatment planning system. Several plans were created with the Eclipse Monte Carlo (EMC) algorithm 11.0.21. Measurements were carried out on a Varian TrueBeam machine for energies from 6 to 22 MeV. Results: The measured and calculated doses agreed very well in tissue-like media. The agreement was reasonable in the presence of lung inhomogeneity. The point dose agreement was within 3.5% and the gamma passing rate at 3%/3 mm was greater than 93% except for 6 MeV (85%). The disagreement can reach as high as 10% in the presence of bone inhomogeneity. This is due to Eclipse reporting dose to the medium, as opposed to dose to water as in conventional calculation engines. Conclusion: Care must be taken when using the Varian Eclipse EMC algorithm for dose calculation in routine clinical use. The algorithm does not report dose to water, on which most clinical experience is based; rather, it reports dose to medium directly. In the presence of an inhomogeneity such as bone, the dose discrepancy can be as high as 10% or even more, depending on the location of the normalization point or volume. As radiation oncology is an empirical science, care must be taken before using EMC-reported monitor units for clinical use.

  4. Fast 3D dosimetric verifications based on an electronic portal imaging device using a GPU calculation engine.

    PubMed

    Zhu, Jinhan; Chen, Lixin; Chen, Along; Luo, Guangwen; Deng, Xiaowu; Liu, Xiaowei

    2015-04-11

    To use a graphic processing unit (GPU) calculation engine to implement a fast 3D pre-treatment dosimetric verification procedure based on an electronic portal imaging device (EPID). The GPU algorithm includes the deconvolution and convolution method for the fluence-map calculations, the collapsed-cone convolution/superposition (CCCS) algorithm for the 3D dose calculations and the 3D gamma evaluation calculations. The results of the GPU-based CCCS algorithm were compared to those of Monte Carlo simulations. The planned and EPID-based reconstructed dose distributions in overridden-to-water phantoms and the original patients were compared for 6 MV and 10 MV photon beams in intensity-modulated radiation therapy (IMRT) treatment plans based on dose differences and gamma analysis. The total single-field dose computation time was less than 8 s, and the gamma evaluation for a 0.1-cm grid resolution was completed in approximately 1 s. The results of the GPU-based CCCS algorithm exhibited good agreement with those of the Monte Carlo simulations. The gamma analysis indicated good agreement between the planned and reconstructed dose distributions for the treatment plans. For the target volume, the differences in the mean dose were less than 1.8%, and the differences in the maximum dose were less than 2.5%. For the critical organs, minor differences were observed between the reconstructed and planned doses. The GPU calculation engine was used to boost the speed of 3D dose and gamma evaluation calculations, thus offering the possibility of true real-time 3D dosimetric verification.

  5. A comparison between anisotropic analytical and multigrid superposition dose calculation algorithms in radiotherapy treatment planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Vincent W.C., E-mail: htvinwu@polyu.edu.hk; Tse, Teddy K.H.; Ho, Cola L.M.

    2013-07-01

    Monte Carlo (MC) simulation is currently the most accurate dose calculation algorithm in radiotherapy planning but requires relatively long processing times. Faster model-based algorithms such as the anisotropic analytical algorithm (AAA) in the Eclipse treatment planning system and multigrid superposition (MGS) in the XiO treatment planning system are two commonly used algorithms. This study compared AAA and MGS against MC, as the gold standard, on brain, nasopharynx, lung, and prostate cancer patients. Computed tomography scans of 6 patients of each cancer type were used. The same hypothetical treatment plan using the same machine and treatment prescription was computed for each case by each planning system using its respective dose calculation algorithm. The doses at reference points including (1) soft tissues only, (2) bones only, (3) air cavities only, (4) the soft tissue-bone boundary (Soft/Bone), (5) the soft tissue-air boundary (Soft/Air), and (6) the bone-air boundary (Bone/Air) were measured and compared using the mean absolute percentage error (MAPE), a function of the percentage dose deviations from MC. In addition, the computation time of each treatment plan was recorded and compared. The MAPEs of MGS were significantly lower than those of AAA in all types of cancers (p<0.001). With regard to body density combinations, the MAPE of AAA ranged from 1.8% (soft tissue) to 4.9% (Bone/Air), whereas that of MGS ranged from 1.6% (air cavities) to 2.9% (Soft/Bone). The MAPEs of MGS (2.6%±2.1) were significantly lower than those of AAA (3.7%±2.5) across all tissue density combinations (p<0.001). The mean computation time of AAA for all treatment plans was significantly lower than that of MGS (p<0.001). Both the AAA and MGS algorithms demonstrated dose deviations of less than 4.0% in most clinical cases, and their performance was better in homogeneous tissues than at tissue boundaries. In general, MGS demonstrated relatively smaller dose deviations than AAA but required longer computation times.
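
    The MAPE figure of merit can be sketched directly, with the Monte Carlo doses taken as the reference; the reference-point values below are toy numbers.

    ```python
    import numpy as np

    def mape(dose_algo, dose_mc):
        """Mean absolute percentage error relative to Monte Carlo doses."""
        return 100.0 * np.mean(np.abs((dose_algo - dose_mc) / dose_mc))

    mc = np.array([2.00, 1.85, 0.95])      # Gy at reference points (toy values)
    aaa = np.array([2.06, 1.80, 0.99])
    print(mape(aaa, mc))
    ```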

  6. Optimal Link Removal for Epidemic Mitigation: A Two-Way Partitioning Approach

    PubMed Central

    Enns, Eva A.; Mounzer, Jeffrey J.; Brandeau, Margaret L.

    2011-01-01

    The structure of the contact network through which a disease spreads may influence the optimal use of resources for epidemic control. In this work, we explore how to minimize the spread of infection via quarantining with limited resources. In particular, we examine which links should be removed from the contact network, given a constraint on the number of removable links, such that the number of nodes which are no longer at risk for infection is maximized. We show how this problem can be posed as a non-convex quadratically constrained quadratic program (QCQP), and we use this formulation to derive a link removal algorithm. The performance of our QCQP-based algorithm is validated on small Erdős–Rényi and small-world random graphs, and then tested on larger, more realistic networks, including a real-world network of injection drug use. We show that our approach achieves near-optimal performance and outperforms other intuitive link removal algorithms, such as removing links in order of edge centrality. PMID:22115862

  7. A Pilot Study Measuring the Distribution and Permeability of a Vaginal HIV Microbicide Gel Vehicle Using Magnetic Resonance Imaging, Single Photon Emission Computed Tomography/Computed Tomography, and a Radiolabeled Small Molecule.

    PubMed

    Fuchs, Edward J; Schwartz, Jill L; Friend, David R; Coleman, Jenell S; Hendrix, Craig W

    2015-11-01

    Vaginal microbicide gels containing tenofovir have proven effective in HIV prevention, offering the advantage of reduced systemic toxicity. We studied the vaginal distribution and effect on mucosal permeability of a gel vehicle. Six premenopausal women were enrolled. In Phase 1, a spreading gel containing (99m)technetium-DTPA ((99m)Tc) radiolabel and gadolinium contrast for magnetic resonance imaging (MRI) was dosed intravaginally. MRI was obtained at 0.5, 4, and 24 h, and single photon emission computed tomography with conventional computed tomography (SPECT/CT) at 1.5, 5, and 25 h postdosing. Pads and tissues were measured for activity to determine gel loss. In Phase 2, nonoxynol-9 (N-9), containing (99m)Tc-DTPA, was dosed as a permeability control; permeability was measured in blood and urine for both phases. SPECT/CT showed the distribution of spreading gel throughout the vagina with the highest concentration of radiosignal in the fornices and ectocervix; signal intensity diminished over 25 h. MRI showed the greatest signal accumulation in the fornices, most notably 1-4 h postdosing. The median (interquartile range) isotope signal loss from the vagina through 6 h was 29.1% (15.8-39.9%). Mucosal permeability to (99m)Tc-DTPA following spreading gel was negligible, in contrast to N-9, with detectable radiosignal in plasma, peaking at 8 h (5-12). Following spreading gel dosing, 0.004% (0.001-2.04%) of the radiosignal accumulated in urine over 12 h compared to 8.31% (7.07-11.01%) with N-9, (p=0.043). Spreading gel distributed variably throughout the vagina, persisting for 24 h, with signal concentrating in the fornices and ectocervix. The spreading gel had no significant effect on vaginal mucosal permeability.

  8. Skin dose mapping for non-uniform x-ray fields using a backscatter point spread function

    NASA Astrophysics Data System (ADS)

    Vijayan, Sarath; Xiong, Zhenyu; Shankar, Alok; Rudin, Stephen; Bednarek, Daniel R.

    2017-03-01

    Beam shaping devices like ROI attenuators and compensation filters modulate the intensity distribution of the x-ray beam incident on the patient. This results in a spatial variation of skin dose due to the variation of primary radiation and also a variation in backscattered radiation from the patient. To determine the backscatter component, backscatter point spread functions (PSFs) are generated using EGS Monte Carlo software. For this study, PSFs were determined by simulating a 1 mm beam incident on the lateral surface of an anthropomorphic head phantom and a 20 cm thick PMMA block phantom. The backscatter PSFs for the head phantom and PMMA phantom are curve-fitted with a Lorentzian function after being normalized to the primary dose intensity (PSFn). PSFn is convolved with the primary dose distribution to generate the scatter dose distribution, which is added to the primary to obtain the total dose distribution. The backscatter convolution technique is incorporated in the dose tracking system (DTS), which tracks skin dose during fluoroscopic procedures and provides a color map of the dose distribution on a 3D patient graphic model. A convolution technique was developed for backscatter dose determination on the non-uniformly spaced graphic-model surface vertices. A Gafchromic film validation was performed for shaped x-ray beams generated with an ROI attenuator and with two compensation filters inserted into the field. The total dose distribution calculated by the backscatter convolution technique closely agreed with that measured with the film.
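
    The convolution step described above can be sketched compactly: a normalized backscatter PSF (here a toy Lorentzian, following the curve fit mentioned in the text) is convolved with the primary dose map, and the result is added to the primary. The kernel width and the integral backscatter factor below are assumptions, not the paper's fitted values.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def total_dose(primary, gamma_mm=8.0, px_mm=1.0):
        """Primary dose plus backscatter via PSFn convolution (toy kernel)."""
        n = primary.shape[0]
        y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2] * px_mm
        psf_n = 1.0 / (1.0 + (np.hypot(x, y) / gamma_mm) ** 2)  # Lorentzian PSFn
        psf_n *= 0.3 / psf_n.sum()        # assume 30% integral backscatter
        return primary + fftconvolve(primary, psf_n, mode="same")

    primary = np.zeros((128, 128))
    primary[32:96, 32:96] = 1.0           # shaped field, relative primary dose
    dose = total_dose(primary)
    ```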

  9. SU-E-T-188: Film Dosimetry Verification of Monte Carlo Generated Electron Treatment Plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Enright, S; Asprinio, A; Lu, L

    2014-06-01

    Purpose: The purpose of this study was to compare dose distributions from film measurements to Monte Carlo generated electron treatment plans. Irradiation with electrons offers the advantages of dose uniformity in the target volume and of minimizing the dose to deeper healthy tissue. Using the Monte Carlo algorithm will improve dose accuracy in regions with heterogeneities and irregular surfaces. Methods: Dose distributions from GafChromic™ EBT3 films were compared to dose distributions from the Electron Monte Carlo algorithm in the Eclipse™ radiotherapy treatment planning system. These measurements were obtained for 6 MeV, 9 MeV and 12 MeV electrons at two depths. All phantoms studied were imported into Eclipse by CT scan. A 1 cm thick solid water template with holes for bone-like and lung-like plugs was used. Different configurations were used with the different plugs inserted into the holes. Configurations with solid-water plugs stacked on top of one another were also used to create an irregular surface. Results: The dose distributions measured from the film agreed with those from the Electron Monte Carlo treatment plan. The accuracy of the Electron Monte Carlo algorithm was also compared to that of Pencil Beam. Dose distributions from Monte Carlo had much higher pass rates than distributions from Pencil Beam when compared to the film. The pass rate for Monte Carlo was in the 80%–99% range, whereas the pass rate for Pencil Beam was as low as 10.76%. Conclusion: The dose distribution from Monte Carlo agreed with the measured dose from the film. When compared to the Pencil Beam algorithm, pass rates for Monte Carlo were much higher. Monte Carlo should be used over Pencil Beam for regions with heterogeneities and irregular surfaces.
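
    Pass rates of the kind quoted above are conventionally computed with a gamma index; the abstract does not state the criteria used, so the sketch below assumes a global 3%/3 mm test applied to 1D profiles.

      import numpy as np

      def gamma_1d(x_mm, dose_meas, dose_calc, dd=0.03, dta=3.0):
          # Global 1D gamma index: for each measured point, search all
          # calculated points for the minimum combined dose-difference /
          # distance-to-agreement metric.
          d_crit = dd * dose_meas.max()        # global dose-difference criterion
          g = np.empty(len(dose_meas), dtype=float)
          for i in range(len(x_mm)):
              dist2 = ((x_mm - x_mm[i]) / dta) ** 2
              dose2 = ((dose_calc - dose_meas[i]) / d_crit) ** 2
              g[i] = np.sqrt(np.min(dist2 + dose2))
          return g

      # Pass rate, e.g. for a film profile vs. a calculated profile:
      # g = gamma_1d(x, film, calc); pass_rate = 100.0 * np.mean(g <= 1.0)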

  10. A point kernel algorithm for microbeam radiation therapy

    NASA Astrophysics Data System (ADS)

    Debus, Charlotte; Oelfke, Uwe; Bartzsch, Stefan

    2017-11-01

    Microbeam radiation therapy (MRT) is a treatment approach in radiation therapy where the treatment field is spatially fractionated into arrays of a few tens of micrometre wide planar beams of unusually high peak doses, separated by low dose regions of several hundred micrometre width. In preclinical studies, this treatment approach has proven to spare normal tissue more effectively than conventional radiation therapy, while being equally efficient in tumour control. So far, dose calculations in MRT, a prerequisite for future clinical applications, have been based on Monte Carlo simulations. However, they are computationally expensive, since scoring volumes have to be small. In this article a kernel based dose calculation algorithm is presented that splits the calculation into photon and electron mediated energy transport, and performs the calculation of peak and valley doses in typical MRT treatment fields within a few minutes. Kernels are analytically calculated depending on the energy spectrum and material composition. In various homogeneous materials, peak doses, valley doses and microbeam profiles are calculated and compared to Monte Carlo simulations. For a microbeam exposure of an anthropomorphic head phantom, calculated dose values are compared to measurements and Monte Carlo calculations. Except for regions close to material interfaces, calculated peak dose values match Monte Carlo results within 4% and valley dose values within 8% deviation. No significant differences are observed between profiles calculated by the kernel algorithm and Monte Carlo simulations. Measurements in the head phantom agree within 4% in the peak and within 10% in the valley region. The presented algorithm is attached to the treatment planning platform VIRTUOS. It was and is used for dose calculations in preclinical and pet-clinical trials at the biomedical beamline ID17 of the European Synchrotron Radiation Facility in Grenoble, France.

  11. A Pharmacogenetics-Based Warfarin Maintenance Dosing Algorithm from Northern Chinese Patients

    PubMed Central

    Luo, Fang; Wang, Jin'e; Shi, Yi; Tan, Yu; Chen, Qianlong; Zhang, Yu; Hui, Rutai; Wang, Yibo

    2014-01-01

    Inconsistent associations with warfarin dose have been observed for genetic variants other than the VKORC1 haplotype and CYP2C9*3 in Chinese people, and few studies on warfarin dose algorithms have been performed in large Chinese Han populations living in Northern China. In 787 consenting patients with heart-valve replacements who were receiving long-term warfarin maintenance therapy, 20 related single nucleotide polymorphisms were genotyped. Only VKORC1 and CYP2C9 SNPs were observed to be significantly associated with warfarin dose. In the derivation cohort (n = 551), warfarin dose variability was influenced, in decreasing order, by VKORC1 rs7294 (27.3%), CYP2C9*3 (7.0%), body surface area (4.2%), age (2.7%), target INR (1.4%), CYP4F2 rs2108622 (0.7%), amiodarone use (0.6%), diabetes mellitus (0.6%), and digoxin use (0.5%), which together account for 45.1% of the warfarin dose variability. In the validation cohort (n = 236), the actual maintenance dose was significantly correlated with the predicted dose (r = 0.609, P<0.001). Our algorithm could improve the personalized management of warfarin use in Northern Chinese patients. PMID:25126975
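
    An algorithm of this type is, in essence, a multiple linear regression of maintenance dose on genetic and clinical covariates. The sketch below uses synthetic placeholder data (the published coefficients are not reproduced) to show how such a model would be fitted on the derivation cohort.

      import numpy as np
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(0)
      n = 551  # derivation cohort size reported in the study

      # Hypothetical covariates, in the paper's order of decreasing contribution:
      X = np.column_stack([
          rng.integers(0, 3, n),      # VKORC1 rs7294 (variant allele count)
          rng.integers(0, 2, n),      # CYP2C9*3 carrier status
          rng.normal(1.7, 0.2, n),    # body surface area (m^2)
          rng.normal(50, 12, n),      # age (years)
          rng.normal(2.5, 0.3, n),    # target INR
          rng.integers(0, 3, n),      # CYP4F2 rs2108622
          rng.integers(0, 2, n),      # amiodarone use
          rng.integers(0, 2, n),      # diabetes mellitus
          rng.integers(0, 2, n),      # digoxin use
      ])
      # Placeholder maintenance dose (mg/day); real values come from the cohort.
      dose = rng.normal(3.0, 1.0, n)

      model = LinearRegression().fit(X, dose)   # fit on the derivation cohort
      print(model.intercept_, model.coef_)      # the algorithm's coefficients
      # Validation: r = np.corrcoef(model.predict(X_valid), dose_valid)[0, 1]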

  12. SU-E-T-397: Evaluation of Planned Dose Distributions by Monte Carlo (0.5%) and Ray Tracing Algorithm for the Spinal Tumors with CyberKnife

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cho, H; Brindle, J; Hepel, J

    2015-06-15

    Purpose: To analyze and evaluate the dose distributions from the Ray Tracing (RT) and Monte Carlo (MC, 0.5% uncertainty) algorithms for a critical structure (the spinal cord), the gross target volume and the planning target volume. Methods: Twenty-four spinal tumor patients were treated with stereotactic body radiotherapy (SBRT) by CyberKnife in 2013 and 2014. The MC algorithm with 0.5% uncertainty was used to recalculate the dose distribution for the treatment plan of each patient using the same beams, beam directions, and monitor units (MUs). Results: The prescription doses are uniformly larger for the MC plans than for RT, except in one case. Dose differences of up to a factor of 1.19 for the 0.25 cc threshold volume and 1.14 for the 1.2 cc threshold volume are observed for the spinal cord. Conclusion: The MC recalculated dose distributions are larger than the original RT calculations for the spinal tumor cases. Based on the accuracy of the MC calculations, more radiation dose might be delivered to the tumor targets and spinal cords with the increased prescription dose.

  13. Blind deconvolution of astronomical images with band limitation determined by optical system parameters

    NASA Astrophysics Data System (ADS)

    Luo, L.; Fan, M.; Shen, M. Z.

    2007-07-01

    Atmospheric turbulence greatly limits the spatial resolution of astronomical images acquired by large ground-based telescopes. The recorded image can be modeled as the convolution of the object function with the point spread function. The statistical relationship between the measured image data, the estimated object and the point spread function follows the Bayes conditional probability distribution, from which a maximum-likelihood formulation is obtained. A blind deconvolution approach based on maximum-likelihood estimation with a real optical band-limitation constraint is presented for removing the effect of atmospheric turbulence on this class of images, through minimization of the convolution error function by a conjugate gradient optimization algorithm. As a result, the object function and the point spread function can be estimated simultaneously from a few recorded images. Following the principles of Fourier optics, the relationship between the telescope optical system parameters and the image band constraint in the frequency domain is formulated for the transformations between the spatial and frequency domains during image processing. Convergence of the algorithm is improved by constraining the estimated functions (the object function and the point spread function) to be nonnegative and the point spread function to be band limited. To avoid losing Fourier components beyond the cutoff frequency during these transformations (when the sampled image data, the spatial domain and the frequency domain have the same size), the detector element (e.g., a pixel of the CCD) should be smaller than a quarter of the diffraction speckle diameter of the telescope when acquiring images at the focal plane. The proposed method can easily be applied to the restoration of wide field-of-view turbulence-degraded images because no object support constraint is used in the algorithm. The validity of the method is examined by computer simulation and by restoration of real Alpha Psc astronomical image data. The results suggest that blind deconvolution with the real optical band constraint can remove the effect of atmospheric turbulence on the observed images, and that the spatial resolution of the object image can reach or exceed the diffraction-limited level.
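
    The paper minimizes a convolution error function with conjugate gradients; as a compact stand-in for that scheme, the sketch below uses alternating Richardson-Lucy-style maximum-likelihood updates with the same two constraints, nonnegativity and a frequency-domain band limit on the PSF. Convolutions are circular via the FFT, and the cutoff here is an arbitrarily chosen normalized spatial frequency.

      import numpy as np

      def project_psf(psf, cutoff):
          # Enforce the physical constraints: zero all frequency components
          # beyond the optical cutoff, then nonnegativity and unit sum.
          F = np.fft.fft2(psf)
          fy = np.fft.fftfreq(psf.shape[0])[:, None]
          fx = np.fft.fftfreq(psf.shape[1])[None, :]
          F[np.hypot(fy, fx) > cutoff] = 0.0
          psf = np.clip(np.fft.ifft2(F).real, 0.0, None)
          return psf / psf.sum()

      def blind_deconvolve(img, psf, n_iter=50, cutoff=0.25):
          # Alternating multiplicative (Richardson-Lucy style) ML updates of
          # object and PSF; flipping arrays approximates correlation.
          img = np.asarray(img, dtype=float)
          conv = lambda a, b: np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)).real
          obj = np.full_like(img, img.mean())
          for _ in range(n_iter):
              ratio = img / (conv(obj, psf) + 1e-12)
              psf = project_psf(psf * conv(ratio, obj[::-1, ::-1]) / obj.sum(),
                                cutoff)
              ratio = img / (conv(obj, psf) + 1e-12)
              obj *= conv(ratio, psf[::-1, ::-1])
          return obj, psf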

  14. SU-E-T-268: Differences in Treatment Plan Quality and Delivery Between Two Commercial Treatment Planning Systems for Volumetric Arc-Based Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, S; Zhang, H; Zhang, B

    2015-06-15

    Purpose: To clinically evaluate the differences in volumetric modulated arc therapy (VMAT) treatment plan and delivery between two commercial treatment planning systems. Methods: Two commercial VMAT treatment planning systems with different VMAT optimization algorithms and delivery approaches were evaluated. This study included 16 clinical VMAT plans performed with the first system: 2 spine, 4 head and neck (HN), 2 brain, 4 pancreas, and 4 pelvis plans. These 16 plans were then re-optimized with the same number of arcs using the second treatment planning system. Planning goals were invariant between the two systems. Gantry speed, dose rate modulation, MLC modulation, plan quality, number of monitor units (MUs), VMAT quality assurance (QA) results, and treatment delivery time were compared between the 2 systems. VMAT QA results were performed using Mapcheck2 and analyzed with gamma analysis (3mm/3% and 2mm/2%). Results: Similar plan quality was achieved with each VMAT optimization algorithm, and the difference in delivery time was minimal. Algorithm 1 achieved planning goals by highly modulating the MLC (total distance traveled by leaves (TL) = 193 cm average over control points per plan), while maintaining a relatively constant dose rate (dose-rate change <100 MU/min). Algorithm 2 involved less MLC modulation (TL = 143 cm per plan), but greater dose-rate modulation (range = 0-600 MU/min). The average number of MUs was 20% less for algorithm 2 (ratio of MUs for algorithms 2 and 1 ranged from 0.5-1). VMAT QA results were similar for all disease sites except HN plans. For HN plans, the average gamma passing rates were 88.5% (2mm/2%) and 96.9% (3mm/3%) for algorithm 1 and 97.9% (2mm/2%) and 99.6% (3mm/3%) for algorithm 2. Conclusion: Both VMAT optimization algorithms achieved comparable plan quality; however, fewer MUs were needed and QA results were more robust for Algorithm 2, which more highly modulated dose rate.

  15. Improving the estimation of mealtime insulin dose in adults with type 1 diabetes: the Normal Insulin Demand for Dose Adjustment (NIDDA) study.

    PubMed

    Bao, Jiansong; Gilbertson, Heather R; Gray, Robyn; Munns, Diane; Howard, Gabrielle; Petocz, Peter; Colagiuri, Stephen; Brand-Miller, Jennie C

    2011-10-01

    Although carbohydrate counting is routine practice in type 1 diabetes, hyperglycemic episodes are common. A food insulin index (FII) has been developed and validated for predicting the normal insulin demand generated by mixed meals in healthy adults. We sought to compare a novel algorithm based on the FII for estimating mealtime insulin dose with carbohydrate counting in adults with type 1 diabetes. A total of 28 patients using insulin pump therapy consumed two different breakfast meals of equal energy, glycemic index, fiber, and calculated insulin demand (both FII = 60), but with an approximately twofold difference in carbohydrate content, in random order on three consecutive mornings. On one occasion, a carbohydrate-counting algorithm was applied to meal A (75 g carbohydrate) for determining the bolus insulin dose. On the other two occasions, carbohydrate counting (about half the insulin dose of meal A) and the FII algorithm (the same dose as meal A) were applied to meal B (41 g carbohydrate). A real-time continuous glucose monitor was used to assess 3-h postprandial glycemia. Compared with carbohydrate counting, the FII algorithm significantly decreased the glucose incremental area under the curve over 3 h (-52%, P = 0.013) and the peak glucose excursion (-41%, P = 0.01) and improved the percentage of time within the normal blood glucose range (4-10 mmol/L) (31%, P = 0.001). There was no significant difference in the occurrence of hypoglycemia. An insulin algorithm based on the physiological insulin demand evoked by foods in healthy subjects may be a useful tool for estimating mealtime insulin dose in patients with type 1 diabetes.
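
    The contrast between the two dosing rules can be made concrete with a little arithmetic. In the sketch below the insulin-to-carbohydrate ratio (10 g per unit) is an assumed illustration, not a value from the study; the point is only that equal-FII meals receive equal FII-based doses while carbohydrate counting roughly halves the dose for meal B.

      def carb_counting_dose(carb_g, icr_g_per_unit=10.0):
          # Conventional bolus: grams of carbohydrate / insulin-to-carb ratio.
          return carb_g / icr_g_per_unit

      def fii_dose(meal_fii, reference_fii, reference_dose_units):
          # FII-based bolus: scale a reference dose by relative insulin demand.
          return reference_dose_units * meal_fii / reference_fii

      dose_a = carb_counting_dose(75)        # meal A: 75 g carbohydrate
      dose_b_cc = carb_counting_dose(41)     # meal B via carb counting (~half)
      dose_b_fii = fii_dose(60, 60, dose_a)  # meal B via FII: same as meal A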

  16. Improvements in pencil beam scanning proton therapy dose calculation accuracy in brain tumor cases with a commercial Monte Carlo algorithm.

    PubMed

    Widesott, Lamberto; Lorentini, Stefano; Fracchiolla, Francesco; Farace, Paolo; Schwarz, Marco

    2018-05-04

    Validation of a commercial Monte Carlo (MC) algorithm (RayStation ver6.0.024) for the treatment of brain tumours with pencil beam scanning (PBS) proton therapy, comparing it via measurements and analytical calculations in clinically realistic scenarios. Methods: For the measurements, a 2D ion chamber array detector (MatriXX PT) was placed underneath the following targets: 1) an anthropomorphic head phantom (with two different thicknesses) and 2) a biological sample (i.e. half a lamb's head). In addition, we compared the MC dose engine vs. the RayStation pencil beam (PB) algorithm clinically implemented so far, in critical conditions such as superficial targets (i.e. in need of a range shifter), different air gaps and gantry angles to simulate both orthogonal and tangential beam arrangements. For every plan the PB and MC dose calculations were compared to measurements using a gamma analysis metric (3%, 3 mm). Results: Regarding the head phantom, the gamma passing rate (GPR) was always >96% and on average >99% for the MC algorithm; the PB algorithm had a GPR ≤90% for all the delivery configurations with a single slab (apart from a 95% GPR at gantry 0° and small air gap), and with two slabs of the head phantom the GPR was >95% only for small air gaps for all three (0°, 45° and 70°) simulated beam gantry angles. Overall the PB algorithm tends to overestimate the dose to the target (up to 25%) and underestimate the dose to the organs at risk (up to 30%). We found similar results (though slightly worse for the PB algorithm) for the two targets of the lamb's head, where only two beam gantry angles were simulated. Conclusions: Our results suggest that in PBS proton therapy the range shifter (RS) needs to be used with extreme caution when planning treatments with an analytical algorithm, due to potentially large discrepancies between the planned dose and the dose delivered to the patient, including in brain tumours where this issue could be underestimated. Our results also suggest that an MC evaluation of the dose should be performed whenever the RS is used and, especially, when it is used with large air gaps and beam directions tangential to the patient surface. © 2018 Institute of Physics and Engineering in Medicine.

  17. Concept development of X-ray mass thickness detection for irradiated items upon electron beam irradiation processing

    NASA Astrophysics Data System (ADS)

    Qin, Huaili; Yang, Guang; Kuang, Shan; Wang, Qiang; Liu, Jingjing; Zhang, Xiaomin; Li, Cancan; Han, Zhiwei; Li, Yuanjing

    2018-02-01

    The present project adopts the principle and technology of X-ray imaging to quickly measure the mass thickness (mass thickness of the item = density of the item × thickness of the item) of irradiated items and thus to determine whether the packaging size and the location of the items inside will meet the requirements on treatable thickness for electron beam irradiation processing. The development of the algorithm of the X-ray mass thickness detector as well as the prediction of dose distribution have been completed. The development of the algorithm was based on X-ray attenuation. Four standard modules, an Al sheet, Al ladders, a PMMA sheet and PMMA ladders, were selected for the algorithm development. The algorithm was optimized until the error between the tested mass thickness and the standard mass thickness was less than 5%. Dose distributions for all energies (1-10 MeV) at each mass thickness were obtained using a Monte Carlo method and used for the analysis of dose distribution, which provides the information of whether the item will be penetrated or not, as well as the maximum dose, minimum dose and DUR of the whole item.
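
    At its core, such an algorithm inverts Beer-Lambert attenuation to recover mass thickness from the measured X-ray signal; a minimal sketch follows, with an assumed effective mass attenuation coefficient standing in for the Al/PMMA step-wedge calibration described above.

      import numpy as np

      def mass_thickness_g_per_cm2(I, I0, mu_over_rho):
          # Invert Beer-Lambert attenuation I = I0 * exp(-(mu/rho) * rho_t):
          # rho_t = ln(I0 / I) / (mu/rho), in g/cm^2. For a polychromatic
          # beam the effective mu/rho would come from step-wedge calibration.
          return np.log(I0 / I) / mu_over_rho

      # Example: a pixel attenuated to 30% of the unattenuated signal, with an
      # assumed effective mu/rho of 0.05 cm^2/g, gives ~24 g/cm^2.
      print(mass_thickness_g_per_cm2(0.30, 1.0, 0.05))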

  18. Changes in prescribed doses for the Seattle neutron therapy system

    NASA Astrophysics Data System (ADS)

    Popescu, A.

    2008-06-01

    From the beginning of the neutron therapy program at the University of Washington Medical Center, the neutron dose distribution in tissue has been calculated using an in-house treatment planning system called PRISM. In order to increase the accuracy of the absorbed dose calculations, two main improvements were made to the PRISM treatment planning system: (a) the algorithm was changed by adding an analytical expression, developed at UWMC, for the dependence of the central-axis wedge factor on field size and depth. Older versions of the treatment-planning algorithm used a constant central-axis wedge factor; (b) a complete, newly commissioned set of measured data was introduced in the latest version of PRISM. The new version of the PRISM algorithm allowed for the use of wedge profiles measured at different depths instead of one wedge profile measured at a single depth. The comparison of the absorbed dose calculations using the old and the improved algorithm showed discrepancies, mainly due to the missing dependence of the central-axis wedge factor on field size and depth and to the absence of wedge profiles at depths other than 10 cm. This study concludes that the previously reported prescribed doses for neutron therapy should be changed.
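
    The analytical wedge-factor expression itself is not given in the abstract; as an illustration of improvement (a), the sketch below fits a hypothetical bilinear form WF(s, d) to made-up commissioning measurements and uses it in place of a constant factor on the central axis.

      import numpy as np
      from scipy.optimize import curve_fit

      def wedge_factor(X, a0, a1, a2, a3):
          # Hypothetical analytical form: central-axis wedge factor varying
          # with field size s (cm) and depth d (cm), with a cross term; the
          # actual UWMC expression is not given in the abstract.
          s, d = X
          return a0 + a1 * s + a2 * d + a3 * s * d

      # Made-up commissioning measurements (s_i, d_i, WF_i):
      s = np.array([5.0, 10.0, 15.0, 20.0, 5.0, 10.0, 15.0, 20.0])
      d = np.array([5.0, 5.0, 5.0, 5.0, 10.0, 10.0, 10.0, 10.0])
      wf = np.array([0.30, 0.31, 0.33, 0.35, 0.31, 0.33, 0.35, 0.38])

      params, _ = curve_fit(wedge_factor, (s, d), wf)
      # dose_wedged = dose_open * wedge_factor((s_point, d_point), *params)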

  19. Blind information-theoretic multiuser detection algorithms for DS-CDMA and WCDMA downlink systems.

    PubMed

    Waheed, Khuram; Salem, Fathi M

    2005-07-01

    Code division multiple access (CDMA) is based on spread-spectrum technology and is a dominant air interface for 2.5G, 3G, and future wireless networks. For the CDMA downlink, the transmitted CDMA signals from the base station (BS) propagate through a noisy multipath fading communication channel before arriving at the receiver of the user equipment/mobile station (UE/MS). Classical CDMA single-user detection (SUD) algorithms implemented in the UE/MS receiver do not provide the required performance for modern high data-rate applications. In contrast, multi-user detection (MUD) approaches require a lot of a priori information not available to the UE/MS. In this paper, three promising adaptive Riemannian contra-variant (or natural) gradient based user detection approaches, capable of handling highly dynamic wireless environments, are proposed. The first approach, blind multiuser detection (BMUD), is the process of simultaneously estimating multiple symbol sequences associated with all the users in the downlink of a CDMA communication system using only the received wireless data and without any knowledge of the user spreading codes. This approach is applicable to CDMA systems with relatively short spreading codes but becomes impractical for systems using long spreading codes. We also propose two other adaptive approaches, namely, RAKE-blind source recovery (RAKE-BSR) and RAKE-principal component analysis (RAKE-PCA), that fuse an adaptive stage into a standard RAKE receiver. This adaptation results in robust user detection algorithms with performance exceeding that of linear minimum mean squared error (LMMSE) detectors for both Direct Sequence CDMA (DS-CDMA) and wide-band CDMA (WCDMA) systems under conditions of congestion, imprecise channel estimation and unmodeled multiple access interference (MAI).

  20. Comparing effects of fire modeling methods on simulated fire patterns and succession: a case study in the Missouri Ozarks

    Treesearch

    Jian Yang; Hong S. He; Brian R. Sturtevant; Brian R. Miranda; Eric J. Gustafson

    2008-01-01

    We compared four fire spread simulation methods (completely random, dynamic percolation, size-based minimum travel time algorithm, and duration-based minimum travel time algorithm) and two fire occurrence simulation methods (Poisson fire frequency model and hierarchical fire frequency model) using a two-way factorial design. We examined these treatment effects on...

  1. Predicting online ratings based on the opinion spreading process

    NASA Astrophysics Data System (ADS)

    He, Xing-Sheng; Zhou, Ming-Yang; Zhuo, Zhao; Fu, Zhong-Qian; Liu, Jian-Guo

    2015-10-01

    Predicting users' online ratings is always a challenging issue and has attracted considerable attention. In this paper, we present a rating prediction method that combines the user opinion spreading process with the collaborative filtering algorithm, where user similarity is defined by measuring the amount of opinion a user transfers to another based on the primitive user-item rating matrix. The proposed method produces a more precise rating prediction for each unrated user-item pair. In addition, we introduce a tunable parameter λ to regulate the preferential diffusion relevant to the degree of both opinion sender and receiver. The numerical results for the Movielens and Netflix data sets show that this algorithm has better accuracy than the standard user-based collaborative filtering algorithm using Cosine and Pearson correlation, without increasing computational complexity. By tuning λ, our method can further boost the prediction accuracy when using Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) as measurements. In the optimal cases, the algorithmic accuracy (MAE and RMSE) is improved by 11.26% and 8.84% on Movielens and by 13.49% and 10.52% on Netflix, compared to the item average method.
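
    A minimal sketch of the approach: user similarity is obtained by spreading each user's opinions over the user-item bipartite graph, with a degree exponent playing the role of the tunable parameter λ (the exact diffusion form below is an assumption, not the paper's equations), and predictions are similarity-weighted averages as in user-based collaborative filtering.

      import numpy as np

      def opinion_similarity(R, lam=0.5):
          # Opinion spread over the user-item bipartite graph. R is the
          # user-item rating matrix with 0 for unrated pairs; the exponent
          # lam on the sender degree tunes preferential diffusion.
          A = (R > 0).astype(float)
          ku = A.sum(axis=1) + 1e-12                 # user degrees
          ki = A.sum(axis=0) + 1e-12                 # item degrees
          W = (A / ki) @ (A / ku[:, None] ** lam).T  # user -> item -> user
          np.fill_diagonal(W, 0.0)                   # no self-similarity
          return W

      def predict_ratings(R, W):
          # User-based CF: similarity-weighted average of others' ratings.
          mask = (R > 0).astype(float)
          return (W @ R) / (W @ mask + 1e-12)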

  2. The dosimetric effects of tissue heterogeneities in intensity-modulated radiation therapy (IMRT) of the head and neck

    NASA Astrophysics Data System (ADS)

    Al-Hallaq, H. A.; Reft, C. S.; Roeske, J. C.

    2006-03-01

    The dosimetric effects of bone and air heterogeneities in head and neck IMRT treatments were quantified. An anthropomorphic RANDO phantom was CT-scanned with 16 thermoluminescent dosimeter (TLD) chips placed in and around the target volume. A standard IMRT plan generated with CORVUS was used to irradiate the phantom five times. On average, measured dose was 5.1% higher than calculated dose. Measurements were higher by 7.1% near the heterogeneities and by 2.6% in tissue. The dose difference between measurement and calculation was outside the 95% measurement confidence interval for six TLDs. Using CORVUS' heterogeneity correction algorithm, the average difference between measured and calculated doses decreased by 1.8% near the heterogeneities and by 0.7% in tissue. Furthermore, dose differences lying outside the 95% confidence interval were eliminated for five of the six TLDs. TLD doses recalculated by Pinnacle3's convolution/superposition algorithm were consistently higher than CORVUS doses, a trend that matched our measured results. These results indicate that the dosimetric effects of air cavities are larger than those of bone heterogeneities, thereby leading to a higher delivered dose compared to CORVUS calculations. More sophisticated algorithms such as convolution/superposition or Monte Carlo should be used for accurate tailoring of IMRT dose in head and neck tumours.

  3. Inter-patient image registration algorithms to disentangle regional dose bioeffects.

    PubMed

    Monti, Serena; Pacelli, Roberto; Cella, Laura; Palma, Giuseppe

    2018-03-20

    Radiation therapy (RT) technological advances call for a comprehensive reconsideration of the definition of dose features leading to radiation induced morbidity (RIM). In this context, the voxel-based approach (VBA) to dose distribution analysis in RT offers a radically new philosophy to evaluate local dose response patterns, as an alternative to dose-volume-histograms for identifying dose sensitive regions of normal tissue. The VBA relies on mapping patient dose distributions into a single reference case anatomy which serves as anchor for local dosimetric evaluations. The inter-patient elastic image registrations (EIRs) of the planning CTs provide the deformation fields necessary for the actual warp of dose distributions. In this study we assessed the impact of EIR on the VBA results in thoracic patients by identifying two state-of-the-art EIR algorithms (Demons and B-Spline). Our analysis demonstrated that both the EIR algorithms may be successfully used to highlight subregions with dose differences associated with RIM that substantially overlap. Furthermore, the inclusion for the first time of covariates within a dosimetric statistical model that faces the multiple comparison problem expands the potential of VBA, thus paving the way to a reliable voxel-based analysis of RIM in datasets with strong correlation of the outcome with non-dosimetric variables.

  4. A dose error evaluation study for 4D dose calculations

    NASA Astrophysics Data System (ADS)

    Milz, Stefan; Wilkens, Jan J.; Ullrich, Wolfgang

    2014-10-01

    Previous studies have shown that respiration induced motion is not negligible for Stereotactic Body Radiation Therapy. The intrafractional breathing induced motion influences the delivered dose distribution on the underlying patient geometry such as the lung or the abdomen. If a static geometry is used, a planning process for these indications does not represent the entire dynamic process. The quality of a full 4D dose calculation approach depends on the dose coordinate transformation process between deformable geometries. This article provides an evaluation study that introduces an advanced method to verify the quality of numerical dose transformation generated by four different algorithms. The used transformation metric value is based on the deviation of the dose mass histogram (DMH) and the mean dose throughout dose transformation. The study compares the results of four algorithms. In general, two elementary approaches are used: dose mapping and energy transformation. Dose interpolation (DIM) and an advanced concept, so called divergent dose mapping model (dDMM), are used for dose mapping. The algorithms are compared to the basic energy transformation model (bETM) and the energy mass congruent mapping (EMCM). For evaluation 900 small sample regions of interest (ROI) are generated inside an exemplary lung geometry (4DCT). A homogeneous fluence distribution is assumed for dose calculation inside the ROIs. The dose transformations are performed with the four different algorithms. The study investigates the DMH-metric and the mean dose metric for different scenarios (voxel sizes: 8 mm, 4 mm, 2 mm, 1 mm; 9 different breathing phases). dDMM achieves the best transformation accuracy in all measured test cases with 3-5% lower errors than the other models. The results of dDMM are reasonable and most efficient in this study, although the model is simple and easy to implement. The EMCM model also achieved suitable results, but the approach requires a more complex programming structure. The study discloses disadvantages for the bETM and for the DIM. DIM yielded insufficient results for large voxel sizes, while bETM is prone to errors for small voxel sizes.
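
    The evaluation metric lends itself to a short sketch: build the cumulative dose mass histogram before and after a transformation and report its deviation together with the change in mass-weighted mean dose. The function names and binning below are illustrative, not the authors' implementation.

      import numpy as np

      def dose_mass_histogram(dose, mass, bins):
          # Cumulative DMH: mass receiving at least each dose level.
          order = np.argsort(dose)
          d_sorted, m_sorted = dose[order], mass[order]
          cum_mass = m_sorted[::-1].cumsum()[::-1]  # mass with dose >= d_sorted
          return np.interp(bins, d_sorted, cum_mass)

      def transformation_error(dose_a, mass_a, dose_b, mass_b, n_bins=100):
          # Deviation of the DMH and of the mass-weighted mean dose across a
          # dose transformation (a sketch of the metric described above).
          bins = np.linspace(0, max(dose_a.max(), dose_b.max()), n_bins)
          dmh_dev = np.abs(dose_mass_histogram(dose_a, mass_a, bins)
                           - dose_mass_histogram(dose_b, mass_b, bins)).mean()
          mean_dev = abs(np.average(dose_a, weights=mass_a)
                         - np.average(dose_b, weights=mass_b))
          return dmh_dev, mean_dev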

  5. A dose error evaluation study for 4D dose calculations.

    PubMed

    Milz, Stefan; Wilkens, Jan J; Ullrich, Wolfgang

    2014-11-07

    Previous studies have shown that respiration induced motion is not negligible for Stereotactic Body Radiation Therapy. The intrafractional breathing induced motion influences the delivered dose distribution on the underlying patient geometry such as the lung or the abdomen. If a static geometry is used, a planning process for these indications does not represent the entire dynamic process. The quality of a full 4D dose calculation approach depends on the dose coordinate transformation process between deformable geometries. This article provides an evaluation study that introduces an advanced method to verify the quality of numerical dose transformation generated by four different algorithms. The used transformation metric value is based on the deviation of the dose mass histogram (DMH) and the mean dose throughout dose transformation. The study compares the results of four algorithms. In general, two elementary approaches are used: dose mapping and energy transformation. Dose interpolation (DIM) and an advanced concept, so called divergent dose mapping model (dDMM), are used for dose mapping. The algorithms are compared to the basic energy transformation model (bETM) and the energy mass congruent mapping (EMCM). For evaluation 900 small sample regions of interest (ROI) are generated inside an exemplary lung geometry (4DCT). A homogeneous fluence distribution is assumed for dose calculation inside the ROIs. The dose transformations are performed with the four different algorithms. The study investigates the DMH-metric and the mean dose metric for different scenarios (voxel sizes: 8 mm, 4 mm, 2 mm, 1 mm; 9 different breathing phases). dDMM achieves the best transformation accuracy in all measured test cases with 3-5% lower errors than the other models. The results of dDMM are reasonable and most efficient in this study, although the model is simple and easy to implement. The EMCM model also achieved suitable results, but the approach requires a more complex programming structure. The study discloses disadvantages for the bETM and for the DIM. DIM yielded insufficient results for large voxel sizes, while bETM is prone to errors for small voxel sizes.

  6. Complete recovery from intractable complex regional pain syndrome, CRPS-type I, following anesthetic ketamine and midazolam.

    PubMed

    Kiefer, Ralph-Thomas; Rohr, Peter; Ploppa, Annette; Altemeyer, Karl-Heinz; Schwartzman, Robert Jay

    2007-06-01

    To describe the treatment of an intractable complex regional pain syndrome I (CRPS-I) patient with anesthetic doses of ketamine supplemented with midazolam. A patient presented with a rapidly progressing contiguous spread of CRPS from a severe ligamentous wrist injury. Standard pharmacological and interventional therapy successively failed to halt the spread of CRPS from the wrist to the entire right arm. Her pain was unmanageable with all standard therapy. As a last treatment option, the patient was transferred to the intensive care unit and treated on a compassionate care basis with anesthetic doses of ketamine in gradually increasing (3-5 mg/kg/h) doses in conjunction with midazolam over a period of 5 days. On the second day of the ketamine and midazolam infusion, edema, and discoloration began to resolve and increased spontaneous movement was noted. On day 6, symptoms completely resolved and infusions were tapered. The patient emerged from anesthesia completely free of pain and associated CRPS signs and symptoms. The patient has maintained this complete remission from CRPS for 8 years now. In a patient with severe spreading and refractory CRPS, a complete and long-term remission from CRPS has been obtained utilizing ketamine and midazolam in anesthetic doses. This intensive care procedure has very serious risks but no severe complications occurred. The psychiatric side effects of ketamine were successfully managed with the concomitant use of midazolam and resolved within 1 month of treatment. This case report illustrates the effectiveness and safety of high-dose ketamine in a patient with generalized, refractory CRPS.

  7. Clinical implementation and evaluation of the Acuros dose calculation algorithm.

    PubMed

    Yan, Chenyu; Combine, Anthony G; Bednarz, Greg; Lalonde, Ronald J; Hu, Bin; Dickens, Kathy; Wynn, Raymond; Pavord, Daniel C; Saiful Huq, M

    2017-09-01

    The main aim of this study is to validate the Acuros XB dose calculation algorithm for a Varian Clinac iX linac in our clinics, and subsequently compare it with the widely used AAA algorithm. The source models for both Acuros XB and AAA were configured by importing the same measured beam data into the Eclipse treatment planning system. Both algorithms were validated by comparing calculated dose with measured dose on a homogeneous water phantom for field sizes ranging from 6 cm × 6 cm to 40 cm × 40 cm. Central axis and off-axis points with different depths were chosen for the comparison. In addition, the accuracy of Acuros was evaluated for wedge fields with wedge angles from 15 to 60°. Similarly, variable field sizes for an inhomogeneous phantom were chosen to validate the Acuros algorithm. In addition, doses calculated by Acuros and AAA at the center of lung equivalent tissue from three different VMAT plans were compared to the ion chamber measured doses in the QUASAR phantom, and the calculated dose distributions by the two algorithms and their differences on patients were compared. Computation time on VMAT plans was also evaluated for Acuros and AAA. Differences between dose-to-water (calculated by AAA and Acuros XB) and dose-to-medium (calculated by Acuros XB) on patient plans were compared and evaluated. For open 6 MV photon beams on the homogeneous water phantom, both Acuros XB and AAA calculations were within 1% of measurements. For 23 MV photon beams, the calculated doses were within 1.5% of measured doses for Acuros XB and 2% for AAA. Testing on the inhomogeneous phantom demonstrated that AAA overestimated doses by up to 8.96% at a point close to the lung/solid water interface, while Acuros XB reduced that to 1.64%. The test on the QUASAR phantom showed that Acuros achieved better agreement in lung equivalent tissue, while AAA underestimated dose for all VMAT plans by up to 2.7%. Acuros XB computation time was about three times faster than AAA for VMAT plans, and computation time for other plans will be discussed at the end. The maximum difference between dose calculated by AAA and dose-to-medium by Acuros XB (Acuros_Dm,m) was 4.3% on patient plans at the isocenter, and the maximum difference between D100 calculated by AAA and by Acuros_Dm,m was 11.3%. When calculating the maximum dose to the spinal cord on patient plans, differences between dose calculated by AAA and Acuros_Dm,m were more than 3%. Compared with AAA, Acuros XB improves accuracy in the presence of inhomogeneity, and also significantly reduces computation time for VMAT plans. Dose differences between AAA and Acuros_Dw,m were generally less than the dose differences between AAA and Acuros_Dm,m. Clinical practitioners should consider making Acuros XB available in clinics; however, further investigation and clarification is needed about which dose reporting mode (dose-to-water or dose-to-medium) should be used in clinics. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carver, R; Popple, R; Benhabib, S

    Purpose: To evaluate the accuracy of electron dose distributions calculated by the Varian Eclipse electron Monte Carlo (eMC) algorithm for use with recent commercially available bolus electron conformal therapy (ECT). Methods: eMC-calculated electron dose distributions for bolus ECT were compared to those previously measured for cylindrical phantoms (retromolar trigone and nose), whose axial cross sections were based on the mid-PTV CT anatomy for each site. The phantoms consisted of SR4 muscle substitute, SR4 bone substitute, and air. The bolus ECT treatment plans were imported into the Eclipse treatment planning system and calculated using the maximum allowable histories (2×10⁹), resulting in a statistical error of <0.2%. Smoothing was not used for these calculations. Differences between eMC-calculated and measured dose distributions were evaluated in terms of absolute dose difference as well as distance to agreement (DTA). Results: Results from the eMC for the retromolar trigone phantom showed 89% (41/46) of dose points within 3% dose difference or 3 mm DTA. There was an average dose difference of −0.12% with a standard deviation of 2.56%. Results for the nose phantom showed 95% (54/57) of dose points within 3% dose difference or 3 mm DTA. There was an average dose difference of 1.12% with a standard deviation of 3.03%. Dose calculation times for the retromolar trigone and nose treatment plans were 15 min and 22 min, respectively, using 16 processors (Intel Xeon E5-2690, 2.9 GHz) on a Varian Eclipse framework agent server (FAS). Results of this study were consistent with those previously reported for the accuracy of the eMC electron dose algorithm and for the .decimal, Inc. pencil beam redefinition algorithm used to plan the bolus. Conclusion: These results show that the accuracy of the Eclipse eMC algorithm is suitable for clinical implementation of bolus ECT.

  9. Dosimetric verification and clinical evaluation of a new commercially available Monte Carlo-based dose algorithm for application in stereotactic body radiation therapy (SBRT) treatment planning

    NASA Astrophysics Data System (ADS)

    Fragoso, Margarida; Wen, Ning; Kumar, Sanath; Liu, Dezhi; Ryu, Samuel; Movsas, Benjamin; Munther, Ajlouni; Chetty, Indrin J.

    2010-08-01

    Modern cancer treatment techniques, such as intensity-modulated radiation therapy (IMRT) and stereotactic body radiation therapy (SBRT), have greatly increased the demand for more accurate treatment planning (structure definition, dose calculation, etc) and dose delivery. The ability to use fast and accurate Monte Carlo (MC)-based dose calculations within a commercial treatment planning system (TPS) in the clinical setting is now becoming more of a reality. This study describes the dosimetric verification and initial clinical evaluation of a new commercial MC-based photon beam dose calculation algorithm, within the iPlan v.4.1 TPS (BrainLAB AG, Feldkirchen, Germany). Experimental verification of the MC photon beam model was performed with film and ionization chambers in water phantoms and in heterogeneous solid-water slabs containing bone and lung-equivalent materials for a 6 MV photon beam from a Novalis (BrainLAB) linear accelerator (linac) with a micro-multileaf collimator (m3 MLC). The agreement between calculated and measured dose distributions in the water phantom verification tests was, on average, within 2%/1 mm (high dose/high gradient) and was within ±4%/2 mm in the heterogeneous slab geometries. Example treatment plans in the lung show significant differences between the MC and one-dimensional pencil beam (PB) algorithms within iPlan, especially for small lesions in the lung, where electronic disequilibrium effects are emphasized. Other user-specific features in the iPlan system, such as options to select dose to water or dose to medium, and the mean variance level, have been investigated. Timing results for typical lung treatment plans show the total computation time (including that for processing and I/O) to be less than 10 min for 1-2% mean variance (running on a single PC with 8 Intel Xeon X5355 CPUs, 2.66 GHz). Overall, the iPlan MC algorithm is demonstrated to be an accurate and efficient dose algorithm, incorporating robust tools for MC-based SBRT treatment planning in the routine clinical setting.

  10. A deep convolutional neural network using directional wavelets for low-dose X-ray CT reconstruction.

    PubMed

    Kang, Eunhee; Min, Junhong; Ye, Jong Chul

    2017-10-01

    Due to the potential risk of inducing cancer, radiation exposure by X-ray CT devices should be reduced for routine patient scanning. However, in low-dose X-ray CT, severe artifacts typically occur due to photon starvation, beam hardening, and other causes, all of which decrease the reliability of the diagnosis. Thus, a high-quality reconstruction method from low-dose X-ray CT data has become a major research topic in the CT community. Conventional model-based de-noising approaches are, however, computationally very expensive, and image-domain de-noising approaches cannot readily remove CT-specific noise patterns. To tackle these problems, we develop a new low-dose X-ray CT algorithm based on a deep-learning approach: a deep convolutional neural network (CNN) applied to the wavelet transform coefficients of low-dose CT images. More specifically, using a directional wavelet transform to extract the directional component of artifacts and exploit the intra- and inter-band correlations, our deep network can effectively suppress CT-specific noise. In addition, our CNN is designed with a residual learning architecture for faster network training and better performance. Experimental results confirm that the proposed algorithm effectively removes complex noise patterns from CT images derived from a reduced X-ray dose. In addition, we show that the wavelet-domain CNN is efficient when used to remove noise from low-dose CT compared to existing approaches. Our results were rigorously evaluated by several radiologists at the Mayo Clinic and won second place at the 2016 "Low-Dose CT Grand Challenge." To the best of our knowledge, this work is the first deep-learning architecture for low-dose CT reconstruction which has been rigorously evaluated and proven to be effective. In addition, the proposed algorithm, in contrast to existing model-based iterative reconstruction (MBIR) methods, has considerable potential to benefit from large data sets. Therefore, we believe that the proposed algorithm opens a new direction in the area of low-dose CT research. © 2017 American Association of Physicists in Medicine.
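
    A toy version of the idea, a residual CNN operating on stacked wavelet subbands, is sketched below (PyTorch and PyWavelets). The published network is far deeper and uses a directional (contourlet-type) transform rather than the single-level separable DWT used here, and the model would of course need training on paired low-dose/routine-dose images.

      import numpy as np
      import pywt
      import torch
      import torch.nn as nn

      class WaveletResidualCNN(nn.Module):
          # Small CNN with residual learning: the network predicts the noise
          # in the wavelet subbands, which is subtracted from the input.
          def __init__(self, channels):
              super().__init__()
              layers = [nn.Conv2d(channels, 64, 3, padding=1), nn.ReLU(inplace=True)]
              for _ in range(4):
                  layers += [nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True)]
              layers += [nn.Conv2d(64, channels, 3, padding=1)]
              self.body = nn.Sequential(*layers)

          def forward(self, x):
              return x - self.body(x)

      def denoise_ct(img, model):
          # Transform to the wavelet domain, denoise subbands, transform back.
          ll, (lh, hl, hh) = pywt.dwt2(img, "db4")
          bands = np.stack([ll, lh, hl, hh])[None].astype(np.float32)
          with torch.no_grad():
              out = model(torch.from_numpy(bands))[0].numpy()
          return pywt.idwt2((out[0], (out[1], out[2], out[3])), "db4")

      # model = WaveletResidualCNN(channels=4)   # untrained; architecture only
      # recon = denoise_ct(low_dose_slice, model)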

  11. An adaptive algorithm for the detection of microcalcifications in simulated low-dose mammography.

    PubMed

    Treiber, O; Wanninger, F; Führ, H; Panzer, W; Regulla, D; Winkler, G

    2003-02-21

    This paper uses the task of microcalcification detection as a benchmark problem to assess the potential for dose reduction in x-ray mammography. We present the results of a newly developed algorithm for detection of microcalcifications as a case study for a typical commercial film-screen system (Kodak Min-R 2000/2190). The first part of the paper deals with the simulation of dose reduction for film-screen mammography based on a physical model of the imaging process. Use of a more sensitive film-screen system is expected to result in additional smoothing of the image. We introduce two different models of that behaviour, called moderate and strong smoothing. We then present an adaptive, model-based microcalcification detection algorithm. Comparing detection results with ground-truth images obtained under the supervision of an expert radiologist allows us to establish the soundness of the detection algorithm. We measure the performance on the dose-reduced images in order to assess the loss of information due to dose reduction. It turns out that the smoothing behaviour has a strong influence on detection rates. For moderate smoothing, a dose reduction by 25% has no serious influence on the detection results, whereas a dose reduction by 50% already entails a marked deterioration of the performance. Strong smoothing generally leads to an unacceptable loss of image quality. The test results emphasize the impact of the more sensitive film-screen system and its characteristics on the problem of assessing the potential for dose reduction in film-screen mammography. The general approach presented in the paper can be adapted to fully digital mammography.

  12. An adaptive algorithm for the detection of microcalcifications in simulated low-dose mammography

    NASA Astrophysics Data System (ADS)

    Treiber, O.; Wanninger, F.; Führ, H.; Panzer, W.; Regulla, D.; Winkler, G.

    2003-02-01

    This paper uses the task of microcalcification detection as a benchmark problem to assess the potential for dose reduction in x-ray mammography. We present the results of a newly developed algorithm for detection of microcalcifications as a case study for a typical commercial film-screen system (Kodak Min-R 2000/2190). The first part of the paper deals with the simulation of dose reduction for film-screen mammography based on a physical model of the imaging process. Use of a more sensitive film-screen system is expected to result in additional smoothing of the image. We introduce two different models of that behaviour, called moderate and strong smoothing. We then present an adaptive, model-based microcalcification detection algorithm. Comparing detection results with ground-truth images obtained under the supervision of an expert radiologist allows us to establish the soundness of the detection algorithm. We measure the performance on the dose-reduced images in order to assess the loss of information due to dose reduction. It turns out that the smoothing behaviour has a strong influence on detection rates. For moderate smoothing, a dose reduction by 25% has no serious influence on the detection results, whereas a dose reduction by 50% already entails a marked deterioration of the performance. Strong smoothing generally leads to an unacceptable loss of image quality. The test results emphasize the impact of the more sensitive film-screen system and its characteristics on the problem of assessing the potential for dose reduction in film-screen mammography. The general approach presented in the paper can be adapted to fully digital mammography.

  13. A Charrelation Matrix-Based Blind Adaptive Detector for DS-CDMA Systems

    PubMed Central

    Luo, Zhongqiang; Zhu, Lidong

    2015-01-01

    In this paper, a blind adaptive detector is proposed for blind separation of user signals and blind estimation of spreading sequences in DS-CDMA systems. The blind separation scheme exploits a charrelation matrix for simple computation and effective extraction of information from observation signal samples. The system model of DS-CDMA signals is formulated as a blind separation framework. The unknown user information and spreading sequences of DS-CDMA systems can be estimated only from the sampled observation signals. Theoretical analysis and simulation results show the improved performance of the proposed algorithm in comparison with existing conventional algorithms used in DS-CDMA systems. In particular, the proposed scheme is suitable when the number of observation samples is small and the signal-to-noise ratio (SNR) is low. PMID:26287209

  14. A Charrelation Matrix-Based Blind Adaptive Detector for DS-CDMA Systems.

    PubMed

    Luo, Zhongqiang; Zhu, Lidong

    2015-08-14

    In this paper, a blind adaptive detector is proposed for blind separation of user signals and blind estimation of spreading sequences in DS-CDMA systems. The blind separation scheme exploits a charrelation matrix for simple computation and effective extraction of information from observation signal samples. The system model of DS-CDMA signals is formulated as a blind separation framework. The unknown user information and spreading sequences of DS-CDMA systems can be estimated only from the sampled observation signals. Theoretical analysis and simulation results show the improved performance of the proposed algorithm in comparison with existing conventional algorithms used in DS-CDMA systems. In particular, the proposed scheme is suitable when the number of observation samples is small and the signal-to-noise ratio (SNR) is low.

  15. SU-F-J-23: Field-Of-View Expansion in Cone-Beam CT Reconstruction by Use of Prior Information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haga, A; Magome, T; Nakano, M

    Purpose: Cone-beam CT (CBCT) has become an integral part of online patient setup in image-guided radiation therapy (IGRT). In addition, the utility of CBCT for dose calculation has actively been investigated. However, the limited size of the field-of-view (FOV), and the resulting CBCT image with a lack of the peripheral area of the patient body, prevents reliable dose calculation. In this study, we aim to develop an FOV-expanded CBCT in an IGRT system to allow dose calculation. Methods: Three lung cancer patients were selected in this study. We collected the cone-beam projection images in the CBCT-based IGRT system (X-ray volume imaging unit, ELEKTA), where the FOV size of the CBCT provided with these projections was 410 × 410 mm² (normal FOV). Using these projections, a CBCT with a size of 728 × 728 mm² was reconstructed by an a posteriori estimation algorithm including prior image constrained compressed sensing (PICCS). The treatment planning CT was used as the prior image. To assess the effectiveness of FOV expansion, a dose calculation was performed on the expanded CBCT image with a region-of-interest (ROI) density mapping method, and it was compared with that of the treatment planning CT as well as that of a CBCT reconstructed by the filtered back projection (FBP) algorithm. Results: The a posteriori estimation algorithm with PICCS clearly visualized the area outside the normal FOV, whereas the FBP algorithm yielded severe streak artifacts outside the normal FOV due to under-sampling. The dose calculation result using the expanded CBCT agreed with that using the treatment planning CT very well; the maximum dose difference was 1.3% for gross tumor volumes. Conclusion: With the a posteriori estimation algorithm, the FOV in CBCT can be expanded. Dose comparison results suggested that the use of expanded CBCTs is acceptable for dose calculation in adaptive radiation therapy. This study has been supported by KAKENHI (15K08691).

  16. Photon beam dosimetry with EBT3 film in heterogeneous regions: Application to the evaluation of dose-calculation algorithms

    NASA Astrophysics Data System (ADS)

    Jung, Hyunuk; Kum, Oyeon; Han, Youngyih; Park, Byungdo; Cheong, Kwang-Ho

    2014-12-01

    For a better understanding of the accuracy of state-of-the-art radiation therapies, 2-dimensional dosimetry in a patient-like environment will be helpful. Therefore, the dosimetry of EBT3 films in non-water-equivalent tissues was investigated, and the accuracy of commercially-used dose-calculation algorithms was evaluated against EBT3 measurements. Dose distributions were measured with EBT3 films for an in-house-designed phantom that contained a lung or a bone substitute, i.e., an air cavity (3 × 3 × 3 cm³) or teflon (2 × 2 × 2 cm³ or 3 × 3 × 3 cm³), respectively. The phantom was irradiated with 6-MV X-rays with field sizes of 2 × 2, 3 × 3, and 5 × 5 cm². The accuracy of EBT3 dosimetry was evaluated by comparing the measured dose with the dose obtained from Monte Carlo (MC) simulations. The dose to bone-equivalent material was obtained by multiplying the EBT3 measurements by the stopping power ratio (SPR). The EBT3 measurements were then compared with the predictions from four algorithms: Monte Carlo (MC) in iPlan, Acuros XB (AXB) and the analytical anisotropic algorithm (AAA) in Eclipse, and superposition-convolution (SC) in Pinnacle. For the air cavity, the EBT3 measurements agreed with the MC calculation to within 2% on average. For teflon, the EBT3 measurements differed by 9.297% (±0.9229%) on average from the Monte Carlo calculation before dose conversion, and by 0.717% (±0.6546%) after applying the SPR. The doses calculated by using the MC, AXB, AAA, and SC algorithms for the air cavity differed from the EBT3 measurements on average by 2.174, 2.863, 18.01, and 8.391%, respectively; for teflon, the average differences were 3.447, 4.113, 7.589, and 5.102%. The EBT3 measurements corrected with the SPR agreed within 2% on average with the MC results, both within and beyond the heterogeneities, thereby indicating that EBT3 dosimetry can be used in heterogeneous media. The MC and AXB dose calculation algorithms exhibited clinically-acceptable accuracy (<5%) in heterogeneities.
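
    The SPR correction is a single multiplicative step; the sketch below spells it out with an illustrative ratio rather than the paper's calibrated value.

      # EBT3 reports a water-equivalent dose, so inside a non-water insert the
      # reading is scaled by a stopping power ratio before comparison (the SPR
      # value here is illustrative, not the paper's calibration).
      SPR_MEDIUM = 0.92

      def dose_to_medium(ebt3_dose_gy, spr=SPR_MEDIUM):
          return ebt3_dose_gy * spr

      def percent_diff(calculated, measured):
          return 100.0 * (calculated - measured) / measured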

  17. Modelling the spread of innovation in wild birds.

    PubMed

    Shultz, Thomas R; Montrey, Marcel; Aplin, Lucy M

    2017-06-01

    We apply three plausible algorithms in agent-based computer simulations to recent experiments on social learning in wild birds. Although some of the phenomena are simulated by all three learning algorithms, several manifestations of social conformity bias are simulated by only the approximate majority (AM) algorithm, which has roots in chemistry, molecular biology and theoretical computer science. The simulations generate testable predictions and provide several explanatory insights into the diffusion of innovation through a population. The AM algorithm's success raises the possibility of its usefulness in studying group dynamics more generally, in several different scientific domains. Our differential-equation model matches simulation results and provides mathematical insights into the dynamics of these algorithms. © 2017 The Author(s).
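
    The AM algorithm has a standard three-state formulation (the paper's agent-based simulations add social-network structure on top of it); a minimal population-protocol sketch:

      import random

      def approximate_majority(pop, steps, seed=0):
          # Three-state AM protocol: an initiator holding an option ('A' or
          # 'B') converts a disagreeing responder to undecided ('U') and
          # recruits an undecided responder to its own option.
          rng = random.Random(seed)
          for _ in range(steps):
              i, j = rng.sample(range(len(pop)), 2)
              if pop[i] in "AB":
                  if pop[j] in "AB" and pop[j] != pop[i]:
                      pop[j] = "U"       # conflicting opinions cancel out
                  elif pop[j] == "U":
                      pop[j] = pop[i]    # undecided adopts the initiator's option
          return pop

      # A 60/40 initial split converges to the majority option w.h.p.:
      birds = list("A" * 60 + "B" * 40)
      print("".join(approximate_majority(birds, 20000)).count("A"))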

  18. Helium ions at the Heidelberg Ion Beam Therapy Center: comparisons between FLUKA Monte Carlo code predictions and dosimetric measurements

    NASA Astrophysics Data System (ADS)

    Tessonnier, T.; Mairani, A.; Brons, S.; Sala, P.; Cerutti, F.; Ferrari, A.; Haberer, T.; Debus, J.; Parodi, K.

    2017-08-01

    In the field of particle therapy helium ion beams could offer an alternative for radiotherapy treatments, owing to their interesting physical and biological properties intermediate between protons and carbon ions. We present in this work the comparisons and validations of the Monte Carlo FLUKA code against in-depth dosimetric measurements acquired at the Heidelberg Ion Beam Therapy Center (HIT). Depth dose distributions in water with and without ripple filter, lateral profiles at different depths in water and a spread-out Bragg peak were investigated. After experimentally-driven tuning of the less known initial beam characteristics in vacuum (beam lateral size and momentum spread) and simulation parameters (water ionization potential), comparisons of depth dose distributions were performed between simulations and measurements, which showed overall good agreement with range differences below 0.1 mm and dose-weighted average dose-differences below 2.3% throughout the entire energy range. Comparisons of lateral dose profiles showed differences in full-width-half-maximum lower than 0.7 mm. Measurements of the spread-out Bragg peak indicated differences with simulations below 1% in the high dose regions and 3% in all other regions, with a range difference less than 0.5 mm. Despite the promising results, some discrepancies between simulations and measurements were observed, particularly at high energies. These differences were attributed to an underestimation of dose contributions from secondary particles at large angles, as seen in a triple Gaussian parametrization of the lateral profiles along the depth. However, the results allowed us to validate FLUKA simulations against measurements, confirming its suitability for 4He ion beam modeling in preparation of clinical establishment at HIT. Future activities building on this work will include treatment plan comparisons using validated biological models between proton and helium ions, either within a Monte Carlo treatment planning engine based on the same FLUKA code, or an independent analytical planning system fed with a validated database of inputs calculated with FLUKA.
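
    For readers unfamiliar with the parametrization mentioned above, a triple Gaussian lateral profile model can be fitted as sketched below; the seed amplitudes and widths are guesses, not values from the paper.

      import numpy as np
      from scipy.optimize import curve_fit

      def triple_gaussian(x_mm, a1, s1, a2, s2, a3, s3):
          # Sum of three zero-centred Gaussians: a narrow core plus two wider
          # components for secondary particles scattered to large angles.
          g = lambda a, s: a * np.exp(-0.5 * (x_mm / s) ** 2)
          return g(a1, s1) + g(a2, s2) + g(a3, s3)

      # params, _ = curve_fit(triple_gaussian, x_mm, lateral_profile,
      #                       p0=[1.0, 3.0, 0.1, 8.0, 0.01, 20.0])  # guessed seeds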

  19. Helium ions at the Heidelberg Ion Beam Therapy Center: comparisons between FLUKA Monte Carlo code predictions and dosimetric measurements.

    PubMed

    Tessonnier, T; Mairani, A; Brons, S; Sala, P; Cerutti, F; Ferrari, A; Haberer, T; Debus, J; Parodi, K

    2017-08-01

    In the field of particle therapy helium ion beams could offer an alternative for radiotherapy treatments, owing to their interesting physical and biological properties intermediate between protons and carbon ions. We present in this work the comparisons and validations of the Monte Carlo FLUKA code against in-depth dosimetric measurements acquired at the Heidelberg Ion Beam Therapy Center (HIT). Depth dose distributions in water with and without ripple filter, lateral profiles at different depths in water and a spread-out Bragg peak were investigated. After experimentally-driven tuning of the less known initial beam characteristics in vacuum (beam lateral size and momentum spread) and simulation parameters (water ionization potential), comparisons of depth dose distributions were performed between simulations and measurements, which showed overall good agreement with range differences below 0.1 mm and dose-weighted average dose-differences below 2.3% throughout the entire energy range. Comparisons of lateral dose profiles showed differences in full-width-half-maximum lower than 0.7 mm. Measurements of the spread-out Bragg peak indicated differences with simulations below 1% in the high dose regions and 3% in all other regions, with a range difference less than 0.5 mm. Despite the promising results, some discrepancies between simulations and measurements were observed, particularly at high energies. These differences were attributed to an underestimation of dose contributions from secondary particles at large angles, as seen in a triple Gaussian parametrization of the lateral profiles along the depth. However, the results allowed us to validate FLUKA simulations against measurements, confirming its suitability for 4 He ion beam modeling in preparation of clinical establishment at HIT. Future activities building on this work will include treatment plan comparisons using validated biological models between proton and helium ions, either within a Monte Carlo treatment planning engine based on the same FLUKA code, or an independent analytical planning system fed with a validated database of inputs calculated with FLUKA.

  20. Research on Ratio of Dosage of Drugs in Traditional Chinese Prescriptions by Data Mining.

    PubMed

    Yu, Xing-Wen; Gong, Qing-Yue; Hu, Kong-Fa; Mao, Wen-Jing; Zhang, Wei-Ming

    2017-01-01

    Maximizing the effectiveness of prescriptions and minimizing the adverse effects of drugs is a key component of patient care. In the practice of traditional Chinese medicine (TCM), it is important to provide clinicians with a reference for the dosing of prescribed drugs. The traditional Cheng-Church (CC) biclustering algorithm is optimized, and TCM prescription dose data are analyzed using the optimized algorithm. Based on an analysis of 212 prescriptions related to TCM treatment of kidney diseases, the study generated 87 prescription dose sub-matrices, each representing reference values for the doses of drugs in different recipes. The optimized CC algorithm can effectively eliminate the interference of zeros in the original dose matrix of TCM prescriptions and avoid zeros appearing in the output sub-matrices. This makes it possible to effectively analyze the reference doses of drugs in different prescriptions related to kidney diseases, and so provide a valuable reference for clinicians to use drugs rationally.
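
    The Cheng-Church algorithm scores candidate biclusters by their mean squared residue; a low residue means the selected prescriptions dose the selected drugs coherently. A sketch of that scoring step (the zero-avoidance modification described above would additionally exclude zero-dose entries before scoring; dose_matrix, rows, and cols are assumed names):

        import numpy as np

        def mean_squared_residue(dose_matrix, rows, cols):
            """Cheng-Church mean squared residue of the sub-matrix picked
            out by `rows` (prescriptions) and `cols` (drugs)."""
            sub = np.asarray(dose_matrix, float)[np.ix_(rows, cols)]
            row_mean = sub.mean(axis=1, keepdims=True)
            col_mean = sub.mean(axis=0, keepdims=True)
            residue = sub - row_mean - col_mean + sub.mean()
            return float((residue ** 2).mean())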

  1. CT brush and CancerZap!: two video games for computed tomography dose minimization.

    PubMed

    Alvare, Graham; Gordon, Richard

    2015-05-12

    X-ray dose from computed tomography (CT) scanners has become a significant public health concern. All CT scanners spray x-ray photons across a patient, including those using compressive sensing algorithms. New technologies make it possible to aim x-ray beams where they are most needed to form a diagnostic or screening image. We have designed a computer game, CT Brush, that takes advantage of this new flexibility. It uses a standard MART algorithm (Multiplicative Algebraic Reconstruction Technique), but with a user-defined, dynamically selected subset of the rays. The image appears as the player moves the CT brush over an initially blank scene, with dose accumulating with every "mouse down" move. The goal is to find the "tumor" with as few moves (least dose) as possible. We have successfully implemented CT Brush in Java and made it available publicly, requesting crowdsourced feedback on improving the open source code. With this experience, we also outline a "shoot 'em up" game, CancerZap!, for photon-limited CT. We anticipate that human computing games like these, analyzed by methods similar to those used to understand eye tracking, will lead to new object-dependent CT algorithms that will require significantly less dose than object-independent nonlinear and compressive sensing algorithms that depend on sprayed photons. Preliminary results suggest substantial dose reduction is achievable.
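
    The game's reconstruction step is a standard MART update restricted to the rays the player has painted so far. A simplified sketch (dense system matrix and a damped multiplicative correction; the authors' open-source Java implementation will differ in detail):

        import numpy as np

        def mart_pass(x, A, b, selected_rays, relax=0.5, eps=1e-12):
            """One MART pass over a user-selected subset of rays.

            x : current image estimate, positive floats (flattened)
            A : A[i, j] = intersection length of ray i with pixel j
            b : measured (positive) projection values
            """
            for i in selected_rays:
                proj = float(A[i] @ x)
                if proj > eps:
                    # multiplicative correction applied only to pixels the ray crosses
                    x *= (b[i] / proj) ** (relax * (A[i] > 0))
            return x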

  2. Percentage depth dose calculation accuracy of model based algorithms in high energy photon small fields through heterogeneous media and comparison with plastic scintillator dosimetry

    PubMed Central

    Mani, Ganesh Kadirampatti; Karunakaran, Kaviarasu

    2016-01-01

    Fields smaller than 4×4 cm² are used in stereotactic and conformal treatments where heterogeneity is normally present. Since dose calculation in both small fields and heterogeneity often involves larger discrepancies, the algorithms used by treatment planning systems (TPS) should be evaluated to achieve better treatment results. This report evaluates the accuracy of four model-based algorithms against measurement: X-ray Voxel Monte Carlo (XVMC) from Monaco, Superposition (SP) from CMS-Xio, and Acuros XB (AXB) and the analytical anisotropic algorithm (AAA) from Eclipse. Measurements were done using an Exradin W1 plastic scintillator in a Solid Water phantom with heterogeneities such as air, lung, bone, and aluminum, irradiated with 6 and 15 MV photons and square fields with sides ranging from 1 to 4 cm. Each heterogeneity was introduced individually at two different depths from the depth of dose maximum (Dmax), one setup nearer to and another farther from Dmax. The central axis percentage depth-dose (CADD) curve for each setup was measured separately and compared with the TPS algorithm calculation for the same setup. The percentage normalized root mean squared deviation (%NRMSD), which represents the whole CADD curve's deviation from the measured curve, was calculated. It was found that for air and lung heterogeneity, for both 6 and 15 MV, all algorithms show maximum deviation for the 1×1 cm² field size, with deviations gradually reducing as field size increases, except for AAA. For aluminum and bone, all algorithms' deviations are smaller for 15 MV irrespective of setup. In all heterogeneity setups, the 1×1 cm² field showed maximum deviation, except in the 6 MV bone setup. For all algorithms in the study, irrespective of energy and field size, the dose deviation is higher when a heterogeneity is nearer to Dmax than when the same heterogeneity is far from Dmax. Also, all algorithms show maximum deviation in lower-density materials compared with high-density materials. PACS numbers: 87.53.Bn, 87.53.kn, 87.56.bd, 87.55.Kd, 87.56.jf PMID:26894345
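
    A sketch of the figure of merit, under one common definition (RMS deviation normalized to the maximum of the measured curve; the paper may normalize differently):

        import numpy as np

        def nrmsd_percent(measured, calculated):
            """Percentage normalized RMS deviation between a measured CADD
            curve and a TPS-calculated one sampled at the same depths."""
            m = np.asarray(measured, float)
            c = np.asarray(calculated, float)
            return 100.0 * np.sqrt(np.mean((c - m) ** 2)) / np.max(m)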

  3. Image reconstruction algorithm for optically stimulated luminescence 2D dosimetry using laser-scanned Al2O3:C and Al2O3:C,Mg films

    NASA Astrophysics Data System (ADS)

    Ahmed, M. F.; Schnell, E.; Ahmad, S.; Yukihara, E. G.

    2016-10-01

    The objective of this work was to develop an image reconstruction algorithm for 2D dosimetry using Al2O3:C and Al2O3:C,Mg optically stimulated luminescence (OSL) films imaged with a laser scanning system. The algorithm takes into account parameters associated with the detector properties and the readout system. Pieces of Al2O3:C films (~8 mm × 8 mm × 125 µm) were irradiated and used to simulate dose distributions with extreme dose gradients (zero and non-zero dose regions). The OSL film pieces were scanned using a custom-built laser-scanning OSL reader, and the data obtained were used to develop and demonstrate a dose reconstruction algorithm. The algorithm includes corrections for: (a) galvo hysteresis, (b) photomultiplier tube (PMT) linearity, (c) phosphorescence, (d) 'pixel bleeding' caused by the 35 ms luminescence lifetime of F-centers in Al2O3, (e) geometric distortion inherent to the galvo scanning system, and (f) position dependence of the light collection efficiency. The algorithm was also applied to 6.0 cm × 6.0 cm × 125 µm or 10.0 cm × 10.0 cm × 125 µm Al2O3:C and Al2O3:C,Mg films exposed to megavoltage x-rays (6 MV) and 12C beams (430 MeV/u). The results obtained using pieces of irradiated films show the ability of the image reconstruction algorithm to correct for pixel bleeding even in the presence of extremely sharp dose gradients. Corrections for geometric distortion and position dependence of the light collection efficiency were shown to minimize characteristic limitations of this system design. We also exemplify the application of the algorithm to a more clinically relevant 6 MV x-ray beam and a 12C pencil beam, demonstrating the potential for small-field dosimetry. The image reconstruction algorithm described here provides the foundation for laser-scanned OSL applied to 2D dosimetry.
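
    The pixel-bleeding correction can be pictured as inverting a first-order decay along the scan direction. A deliberately simplified sketch (it assumes each pixel's measured signal is the true signal plus the previous pixel's measurement decayed with the 35 ms F-center lifetime; the published algorithm is more elaborate):

        import numpy as np

        def unbleed_row(row, pixel_dwell_ms, tau_ms=35.0):
            """Invert m[k] = x[k] + a * m[k-1], with a = exp(-dt / tau),
            for one scan row of the laser-scanned OSL image."""
            a = np.exp(-pixel_dwell_ms / tau_ms)
            m = np.asarray(row, float)
            x = m.copy()
            x[1:] -= a * m[:-1]   # subtract the decayed carry-over from the previous pixel
            return x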

  4. Low Dose CT Reconstruction via Edge-preserving Total Variation Regularization

    PubMed Central

    Tian, Zhen; Jia, Xun; Yuan, Kehong; Pan, Tinsu; Jiang, Steve B.

    2014-01-01

    High radiation dose in CT scans increases the lifetime risk of cancer and has become a major clinical concern. Recently, iterative reconstruction algorithms with Total Variation (TV) regularization have been developed to reconstruct CT images from highly undersampled data acquired at low mAs levels in order to reduce the imaging dose. Nonetheless, low-contrast structures tend to be smoothed out by the TV regularization, posing a great challenge for the TV method. To solve this problem, in this work we develop an iterative CT reconstruction algorithm with edge-preserving TV regularization to reconstruct CT images from highly undersampled data obtained at low mAs levels. The CT image is reconstructed by minimizing an energy consisting of an edge-preserving TV norm and a data fidelity term posed by the x-ray projections. The edge-preserving TV term preferentially performs smoothing only on the non-edge parts of the image in order to better preserve the edges, which is realized by introducing a penalty weight into the original total variation norm. During the reconstruction process, the pixels at edges are gradually identified and given small penalty weights. Our iterative algorithm is implemented on GPU to improve its speed. We test our reconstruction algorithm on a digital NCAT phantom, a physical chest phantom, and a Catphan phantom. Reconstruction results from a conventional FBP algorithm and a TV regularization method without the edge-preserving penalty are also presented for comparison purposes. The experimental results illustrate that both the TV-based algorithm and our edge-preserving TV algorithm outperform the conventional FBP algorithm in suppressing streaking artifacts and image noise in the low-dose context. Our edge-preserving algorithm is superior to the TV-based algorithm in that it preserves more information in low-contrast structures and therefore maintains acceptable spatial resolution. PMID:21860076
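
    A sketch of one explicit step of the weighted (edge-preserving) TV idea applied to the denoising part of the energy, with an assumed per-pixel weight w that has already been set small at detected edges (the paper's GPU solver and full projection-fidelity term are not reproduced here):

        import numpy as np

        def weighted_tv_step(u, f, w, lam=0.1, step=0.1, eps=1e-8):
            """Gradient-descent step on sum(w*|grad u|) + lam/2*||u - f||^2."""
            ux = np.diff(u, axis=1, append=u[:, -1:])   # forward differences
            uy = np.diff(u, axis=0, append=u[-1:, :])
            mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
            px, py = w * ux / mag, w * uy / mag
            # backward-difference divergence of the weighted gradient field
            div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
            return u + step * (div - lam * (u - f))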

  5. SU-F-T-431: Dosimetric Validation of Acuros XB Algorithm for Photon Dose Calculation in Water

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumar, L; Yadav, G; Kishore, V

    2016-06-15

    Purpose: To validate the Acuros XB algorithm implemented in the Eclipse treatment planning system version 11 (Varian Medical Systems, Inc., Palo Alto, CA, USA) for photon dose calculation. Methods: Acuros XB is a linear Boltzmann transport equation (LBTE) solver that solves the LBTE explicitly and gives results equivalent to Monte Carlo. A 6 MV photon beam from a Varian Clinac-iX (2300CD) was used for the dosimetric validation of Acuros XB. Percentage depth dose (PDD) and profile (at dmax, 5, 10, 20 and 30 cm) measurements were performed in water for field sizes of 2×2, 4×4, 6×6, 10×10, 20×20, 30×30 and 40×40 cm². Acuros XB results were compared against measurements and the anisotropic analytical algorithm (AAA). Results: Acuros XB showed good agreement with measurements and was comparable to AAA. PDDs and profiles differed by less than one percent from measurements, and from the PDDs and profiles calculated by AAA, for all field sizes. For the TPS-calculated gamma error histograms, the average gamma errors in the PDD curves before and after dmax were 0.28 and 0.15 for Acuros XB and 0.24 and 0.17 for AAA, respectively; the average gamma errors in the profile curves in the central region, penumbra region and outside-field region were 0.17, 0.21 and 0.42 for Acuros XB and 0.10, 0.22 and 0.35 for AAA, respectively. Conclusion: The dosimetric validation of the Acuros XB algorithm in a water medium was satisfactory. Acuros XB has the potential to perform photon dose calculation with high accuracy, which is desirable for the modern radiotherapy environment.

  6. Dose Titration Algorithm Tuning (DTAT) should supersede 'the' Maximum Tolerated Dose (MTD) in oncology dose-finding trials.

    PubMed

    Norris, David C

    2017-01-01

    Background. Absent adaptive, individualized dose-finding in early-phase oncology trials, subsequent 'confirmatory' Phase III trials risk suboptimal dosing, with resulting loss of statistical power and reduced probability of technical success for the investigational therapy. While progress has been made toward explicitly adaptive dose-finding and quantitative modeling of dose-response relationships, most such work continues to be organized around a concept of 'the' maximum tolerated dose (MTD). The purpose of this paper is to demonstrate concretely how the aim of early-phase trials might be conceived, not as 'dose-finding', but as dose titration algorithm (DTA)-finding. Methods. A Phase I dosing study is simulated for a notional cytotoxic chemotherapy drug, with neutropenia constituting the critical dose-limiting toxicity. The drug's population pharmacokinetics and myelosuppression dynamics are simulated using published parameter estimates for docetaxel. The amenability of this model to linearization is explored empirically. The properties of a simple DTA targeting a neutrophil nadir of 500 cells/mm³ using a Newton-Raphson heuristic are explored through simulation in 25 simulated study subjects. Results. Individual-level myelosuppression dynamics in the simulation model approximately linearize under simple transformations of neutrophil concentration and drug dose. The simulated dose titration exhibits largely satisfactory convergence, with great variance in individualized optimal dosing. Some titration courses exhibit overshooting. Conclusions. The large inter-individual variability in simulated optimal dosing underscores the need to replace 'the' MTD with an individualized concept of MTDi. To illustrate this principle, the simplest possible DTA capable of realizing such a concept is demonstrated. Qualitative phenomena observed in this demonstration support discussion of the notion of tuning such algorithms. Although illustrated here specifically in relation to cytotoxic chemotherapy, the DTAT principle appears similarly applicable to Phase I studies of cancer immunotherapy and molecularly targeted agents.
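
    The titration idea can be shown with a secant (discrete Newton-Raphson) step on log-transformed dose and nadir, exploiting the approximate linearization the paper reports. This is an illustrative reading, not the paper's exact heuristic, and it assumes two prior dosing cycles are available:

        import math

        def next_dose(dose_1, nadir_1, dose_2, nadir_2, target=500.0):
            """Propose the next cycle's dose so the neutrophil nadir
            approaches 500 cells/mm^3, assuming log(nadir) is locally
            linear in log(dose)."""
            x1, x2 = math.log(dose_1), math.log(dose_2)
            y1, y2 = math.log(nadir_1), math.log(nadir_2)
            slope = (y2 - y1) / (x2 - x1)   # d log(nadir) / d log(dose), typically < 0
            return math.exp(x2 + (math.log(target) - y2) / slope)

    Overshooting of the kind the simulations exhibit corresponds to a locally misestimated slope, which is exactly what 'tuning' such an algorithm (e.g., damping the step) would address.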

  7. A Random Access Algorithm for Frequency Hopped Spread Spectrum Packet Radio Networks

    DTIC Science & Technology

    1986-03-01

    [OCR fragment: the excerpt derives an expression for the probability p in Section 4.3 of the report; the only recoverable citation is "...Algorithms," Univ. of Connecticut, EECS Dept. Technical Report UCT/DEECS/TR-86-1, January 1986, also submitted for publication, plus a partial reference to W. Peterson.]

  8. Phase I study of continuous MKC-1 in patients with advanced or metastatic solid malignancies using the modified Time-to-Event Continual Reassessment Method (TITE-CRM) dose escalation design.

    PubMed

    Tevaarwerk, Amye; Wilding, George; Eickhoff, Jens; Chappell, Rick; Sidor, Carolyn; Arnott, Jamie; Bailey, Howard; Schelman, William; Liu, Glenn

    2012-06-01

    MKC-1 is an oral cell-cycle inhibitor with broad antitumor activity in preclinical models. Clinical studies demonstrated modest antitumor activity using an intermittent dosing schedule; however, additional preclinical data suggested continuous dosing could be efficacious, with additional effects against the mTOR/AKT pathway. The primary objectives were to determine the maximum tolerated dose (MTD) and response of continuous MKC-1. Secondary objectives included characterizing the dose-limiting toxicities (DLTs) and pharmacokinetics (PK). Patients with solid malignancies were eligible if they had measurable disease, ECOG PS ≤1, and adequate organ function. Exclusions included brain metastases and inability to receive oral drugs. MKC-1 was dosed twice daily, continuously, in 28-day cycles. Other medications were eliminated if there were possible drug interactions. Doses were assigned using a TITE-CRM algorithm following enrollment of the first 3 patients. Disease response was assessed every 8 weeks. Between 5/08 and 9/09, 24 patients enrolled (15 M/9 F, median 58 years, range 44-77). Patients 1-3 received 120 mg/d of MKC-1; patients 4-24 were dosed per the TITE-CRM algorithm: 150 mg [n = 1], 180 [2], 200 [1], 230 [1], 260 [5], 290 [6], 320 [5]. The median time on drug was 8 weeks (range 4-28). The only DLT occurred at 320 mg (grade 3 fatigue). Stable disease occurred at 150 mg/d (28 weeks; RCC) and 320 mg/d (16 weeks; breast, parotid). Escalation halted at 320 mg/d. Day 28 pharmacokinetics indicated absorption and active metabolites. Continuous MKC-1 was well tolerated; there were no RECIST responses, although clinical benefit occurred in 3/24 patients. Dose escalation stopped at 320 mg/d, and this is the MTD as defined by the CRM dose escalation algorithm; this cumulative dose/cycle exceeds that determined from intermittent dosing studies. A TITE-CRM allowed for rapid dose escalation and was able to account for late toxicities with continuous dosing via a modified algorithm.

  9. TU-F-BRF-03: Effect of Radiation Therapy Planning Scan Registration On the Dose in Lung Cancer Patient CT Scans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cunliffe, A; Contee, C; White, B

    Purpose: To characterize the effect of deformable registration of serial computed tomography (CT) scans on the radiation dose calculated from a treatment planning scan. Methods: Eighteen patients who received curative doses (≥60Gy, 2Gy/fraction) of photon radiation therapy for lung cancer treatment were retrospectively identified. For each patient, a diagnostic-quality pre-therapy (4-75 days) CT scan and a treatment planning scan with an associated dose map calculated in Pinnacle were collected. To establish baseline correspondence between scan pairs, a researcher manually identified anatomically corresponding landmark point pairs between the two scans. Pre-therapy scans were co-registered with planning scans (and associated dose maps) using the Plastimatch demons and Fraunhofer MEVIS deformable registration algorithms. Landmark points in each pre-therapy scan were automatically mapped to the planning scan using the displacement vector field output from both registration algorithms. The absolute difference in planned dose (|ΔD|) between manually and automatically mapped landmark points was calculated. Using regression modeling, |ΔD| was modeled as a function of the distance between manually and automatically matched points (registration error, E), the dose standard deviation (SD-dose) in the eight-pixel neighborhood, and the registration algorithm used. Results: 52-92 landmark point pairs (median: 82) were identified in each patient's scans. Average |ΔD| across patients was 3.66Gy (range: 1.2-7.2Gy). |ΔD| was significantly reduced by 0.53Gy using Plastimatch demons compared with Fraunhofer MEVIS. |ΔD| increased significantly as a function of E (0.39Gy/mm) and SD-dose (2.23Gy/Gy). Conclusion: An average error of <4Gy in radiation dose was introduced when points were mapped between CT scan pairs using deformable registration. Dose differences following registration were significantly increased when the Fraunhofer MEVIS registration algorithm was used, spatial registration errors were larger, and the dose gradient was higher (i.e., higher SD-dose). To our knowledge, this is the first study to directly compute dose errors following deformable registration of lung CT scans.

  10. Estimation of internal organ motion-induced variance in radiation dose in non-gated radiotherapy

    NASA Astrophysics Data System (ADS)

    Zhou, Sumin; Zhu, Xiaofeng; Zhang, Mutian; Zheng, Dandan; Lei, Yu; Li, Sicong; Bennion, Nathan; Verma, Vivek; Zhen, Weining; Enke, Charles

    2016-12-01

    In the delivery of non-gated radiotherapy (RT), owing to intra-fraction organ motion, a certain degree of RT dose uncertainty is present. Herein, we propose a novel mathematical algorithm to estimate the mean and variance of RT dose that is delivered without gating. These parameters are specific to individual internal organ motion, dependent on individual treatment plans, and relevant to the RT delivery process. This algorithm uses images from a patient’s 4D simulation study to model the actual patient internal organ motion during RT delivery. All necessary dose rate calculations are performed in fixed patient internal organ motion states. The analytical and deterministic formulae of mean and variance in dose from non-gated RT were derived directly via statistical averaging of the calculated dose rate over possible random internal organ motion initial phases, and did not require constructing relevant histograms. All results are expressed in dose rate Fourier transform coefficients for computational efficiency. Exact solutions are provided to simplified, yet still clinically relevant, cases. Results from a volumetric-modulated arc therapy (VMAT) patient case are also presented. The results obtained from our mathematical algorithm can aid clinical decisions by providing information regarding both mean and variance of radiation dose to non-gated patients prior to RT delivery.
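
    A numerical stand-in for the analytical result (the paper works with Fourier coefficients of the dose rate; here the average over a uniformly random initial phase is taken by brute force, with assumed array-based inputs):

        import numpy as np

        def dose_mean_var(dose_rate_per_phase, beam_on_time_s, period_s):
            """Mean and variance of the dose at a point over a random
            initial motion phase.

            dose_rate_per_phase : dose rate (Gy/s) at each of K equally
            spaced phases of the periodic organ motion, e.g. from a 4D
            simulation study.
            """
            r = np.asarray(dose_rate_per_phase, float)
            k = len(r)
            dt = period_s / k
            n = int(round(beam_on_time_s / dt))
            # delivered dose for each possible starting phase p
            doses = np.array([np.take(r, np.arange(p, p + n) % k).sum() * dt
                              for p in range(k)])
            return doses.mean(), doses.var()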

  11. The characteristics of dose at mass interface on lung cancer Stereotactic Body Radiotherapy (SBRT) simulation

    NASA Astrophysics Data System (ADS)

    Wulansari, I. H.; Wibowo, W. E.; Pawiro, S. A.

    2017-05-01

    In lung cancer cases, it is difficult for the Treatment Planning System (TPS) to predict the dose at or near the mass interface. This prediction error might influence the minimum or maximum dose received by the lung cancer. In addition to target motion, the target dose prediction error also contributes to the combined error during the course of treatment. The objective of this work was to verify the dose plan calculated by the adaptive convolution algorithm in Pinnacle3 at the mass interface against a set of measurements. The measurements were performed using Gafchromic EBT3 film in a static and a dynamic CIRS phantom with amplitudes of 5 mm, 10 mm, and 20 mm in the superior-inferior motion direction. The static and dynamic phantoms were scanned with fast CT and slow CT before planning. The results showed that the adaptive convolution algorithm mostly predicted the mass interface dose lower than the measured dose, in a range of -0.63% to 8.37% for the static phantom in fast CT scanning and -0.27% to 15.9% for the static phantom in slow CT scanning. In the dynamic phantom, the predicted mass interface dose differed from the measured dose by up to -89% for fast CT and varied from -17% to 37% for slow CT. These interface dose differences caused the mass dose to decrease in fast CT, except for the 10 mm motion amplitude, and to increase in slow CT for the greater amplitudes of motion.

  12. TH-E-BRE-07: Development of Dose Calculation Error Predictors for a Widely Implemented Clinical Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Egan, A; Laub, W

    2014-06-15

    Purpose: Several shortcomings of the current implementation of the analytic anisotropic algorithm (AAA) may lead to dose calculation errors in highly modulated treatments delivered to highly heterogeneous geometries. Here we introduce a set of dosimetric error predictors that can be applied to a clinical treatment plan and patient geometry in order to identify high-risk plans. Once a problematic plan is identified, the treatment can be recalculated with a more accurate algorithm in order to better assess its viability. Methods: Here we focus on three distinct sources of dosimetric error in the AAA algorithm. First, due to a combination of discrepancies in small-field beam modeling as well as volume averaging effects, dose calculated through small MLC apertures can be underestimated, while that behind small MLC blocks can be overestimated. Second, due to the rectilinear scaling of the Monte Carlo generated pencil beam kernel, energy is not properly transported through heterogeneities near, but not impeding, the central axis of the beamlet. And third, AAA overestimates dose in regions of very low density (<0.2 g/cm³). We have developed an algorithm to detect the location and magnitude of each scenario within the patient geometry, namely the field-size index (FSI), the heterogeneous scatter index (HSI), and the low-density index (LDI), respectively. Results: The error indices successfully identify deviations between AAA and Monte Carlo dose distributions in simple phantom geometries. The algorithms are currently implemented in the MATLAB computing environment and are able to run on a typical RapidArc head and neck geometry in less than an hour. Conclusion: Because these error indices successfully identify each type of error in contrived cases, with sufficient benchmarking this method can be developed into a clinical tool that may help estimate AAA dose calculation errors and indicate when it might be advisable to use Monte Carlo calculations.
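
    Of the three predictors, the low-density index is the simplest to picture: flag the voxels where AAA is expected to overestimate dose. A minimal sketch (the threshold and return convention are assumptions; the paper's FSI and HSI definitions are not reproduced here):

        import numpy as np

        def low_density_index(density_g_cm3, threshold=0.2):
            """Mask of voxels below ~0.2 g/cm^3 and the flagged fraction
            of the patient geometry."""
            mask = np.asarray(density_g_cm3) < threshold
            return mask, float(mask.mean())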

  13. Technical Note: A novel leaf sequencing optimization algorithm which considers previous underdose and overdose events for MLC tracking radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wisotzky, Eric, E-mail: eric.wisotzky@charite.de, E-mail: eric.wisotzky@ipk.fraunhofer.de; O’Brien, Ricky; Keall, Paul J., E-mail: paul.keall@sydney.edu.au

    2016-01-15

    Purpose: Multileaf collimator (MLC) tracking radiotherapy is complex, as the beam pattern needs to be modified due to the planned intensity modulation as well as the real-time target motion. The target motion cannot be planned; therefore, the modified beam pattern differs from the original plan and the MLC sequence needs to be recomputed online. Current MLC tracking algorithms use a greedy heuristic in that they optimize for a given time but ignore past errors. To overcome this problem, the authors have developed and improved an algorithm that minimizes large underdose and overdose regions. Additionally, previous underdose and overdose events are taken into account to avoid regions with a high quantity of dose events. Methods: The authors improved the existing MLC motion control algorithm by introducing a cumulative underdose/overdose map. This map represents the actual projection of the planned tumor shape and logs dose events occurring at each specific region. These events have an impact on the dose cost calculation and reduce the recurrence of dose events at each region. The authors studied the improvement of the new temporal optimization algorithm in terms of the L1-norm minimization of the sum of overdose and underdose, compared to not accounting for previous dose events. For evaluation, the authors simulated the delivery of 5 conformal and 14 intensity-modulated radiotherapy (IMRT) plans with 7 patient-measured 3D tumor motion traces. Results: Simulations with conformal shapes showed an improvement of the L1-norm of up to 8.5% after 100 MLC modification steps. Experiments showed comparable improvements with the same type of treatment plans. Conclusions: A novel leaf sequencing optimization algorithm which considers previous dose events for MLC tracking radiotherapy has been developed and investigated. Reductions in underdose/overdose are observed for conformal and IMRT delivery.
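
    A sketch of the cumulative-map bookkeeping in beam's-eye view (boolean aperture masks and a linear penalty are assumptions; the actual leaf-sequencing cost function is richer):

        import numpy as np

        def update_dose_event_map(event_map, planned_open, delivered_open, gain=1.0):
            """Accumulate under/overdose events and derive a per-region
            penalty that discourages repeating events at the same spot.

            planned_open, delivered_open : boolean beam's-eye-view masks
            """
            underdose = planned_open & ~delivered_open   # target open, beam blocked
            overdose = ~planned_open & delivered_open    # target blocked, beam open
            event_map = event_map + underdose + overdose
            penalty = 1.0 + gain * event_map             # weight applied to the dose cost
            return event_map, penalty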

  14. Dosimetric comparison of peripheral NSCLC SBRT using Acuros XB and AAA calculation algorithms.

    PubMed

    Ong, Chloe C H; Ang, Khong Wei; Soh, Roger C X; Tin, Kah Ming; Yap, Jerome H H; Lee, James C L; Bragg, Christopher M

    2017-01-01

    There is concern about dose calculation in highly heterogeneous environments such as the thorax region. This study compares the quality of treatment plans for peripheral non-small cell lung cancer (NSCLC) stereotactic body radiation therapy (SBRT) using 2 calculation algorithms, namely the Eclipse Anisotropic Analytical Algorithm (AAA) and Acuros External Beam (AXB), for 3-dimensional conformal radiation therapy (3DCRT) and volumetric-modulated arc therapy (VMAT). Four-dimensional computed tomography (4DCT) data from 20 anonymized patients were studied using the Varian Eclipse planning system, AXB, and AAA version 10.0.28. A 3DCRT plan and a VMAT plan were generated using AAA and AXB with constant plan parameters for each patient. The prescription and dose constraints were benchmarked against the Radiation Therapy Oncology Group (RTOG) 0915 protocol. Planning parameters were compared statistically using Mann-Whitney U tests. Results showed that 3DCRT and VMAT plans had target coverage up to 8% lower when calculated using AXB as compared with AAA. The conformity index (CI) for AXB plans was 4.7% lower than for AAA plans, but was closer to unity, indicating better target conformity. AXB produced plans with global maximum doses that were, on average, 2% hotter than AAA plans. Both 3DCRT and VMAT plans were able to achieve D95%. VMAT plans were shown to be more conformal (CI = 1.01) and were at least 3.2% and 1.5% lower in terms of PTV maximum and mean dose, respectively. There was no statistically significant difference in doses received by organs at risk (OARs) regardless of calculation algorithm or treatment technique. In general, the difference in tissue modeling is responsible for the dose distribution differences between the AXB and AAA algorithms. AXB VMAT plans could be used to benefit patients receiving peripheral NSCLC SBRT. Copyright © 2017 American Association of Medical Dosimetrists. Published by Elsevier Inc. All rights reserved.

  15. Towards quantitative imaging: stability of fully automated nodule segmentation across varied dose levels and reconstruction parameters in a low-dose CT screening patient cohort

    NASA Astrophysics Data System (ADS)

    Wahi-Anwar, M. Wasil; Emaminejad, Nastaran; Hoffman, John; Kim, Grace H.; Brown, Matthew S.; McNitt-Gray, Michael F.

    2018-02-01

    Quantitative imaging in lung cancer CT seeks to characterize nodules through quantitative features, usually from a region of interest delineating the nodule. The segmentation, however, can vary depending on segmentation approach and image quality, which can affect the extracted feature values. In this study, we utilize a fully automated nodule segmentation method - to avoid reader-influenced inconsistencies - to explore the effects of varied dose levels and reconstruction parameters on segmentation. Raw projection CT images from a low-dose screening patient cohort (N=59) were reconstructed at multiple dose levels (100%, 50%, 25%, 10%), two slice thicknesses (1.0mm, 0.6mm), and a medium kernel. Fully automated nodule detection and segmentation was then applied, from which 12 nodules were selected. The Dice similarity coefficient (DSC) was used to assess the similarity of the segmentation ROIs of the same nodule across different reconstruction and dose conditions. Nodules at 1.0mm slice thickness and dose levels of 25% and 50% resulted in DSC values greater than 0.85 when compared to 100% dose, with lower dose leading to a lower average and wider spread of DSC values. At 0.6mm, the increased bias and wider spread of DSC values from lowering dose were more pronounced. The effects of dose reduction on DSC for CAD-segmented nodules were similar in magnitude to those of reducing the slice thickness from 1.0mm to 0.6mm. In conclusion, variation of dose and slice thickness can result in very different segmentations because of noise and image quality. However, there exists some stability in segmentation overlap: even at 1.0mm, an image at 25% of the low-dose scan's dose still results in segmentations similar to those seen in a full-dose scan.
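
    The stability metric itself is one line. A sketch, assuming binary masks on a common voxel grid (and at least one non-empty mask):

        import numpy as np

        def dice(mask_a, mask_b):
            """Dice similarity coefficient between two segmentations of
            the same nodule reconstructed under different conditions."""
            a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
            return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())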

  16. Efficiency and effectiveness of the use of an acenocoumarol pharmacogenetic dosing algorithm versus usual care in patients with venous thromboembolic disease initiating oral anticoagulation: study protocol for a randomized controlled trial

    PubMed Central

    2012-01-01

    Background Hemorrhagic events are frequent in patients on treatment with anti-vitamin-K oral anticoagulants due to their narrow therapeutic margin. Studies performed with acenocoumarol have shown the relationship between demographic, clinical and genotypic variants and the response to these drugs. Once the influence of these genetic and clinical factors on the dose of acenocoumarol needed to maintain a stable international normalized ratio (INR) has been demonstrated, new strategies need to be developed to predict the appropriate doses of this drug. Several pharmacogenetic algorithms have been developed for warfarin, but only three have been developed for acenocoumarol. After the development of a pharmacogenetic algorithm, the obvious next step is to demonstrate its effectiveness and utility by means of a randomized controlled trial. The aim of this study is to evaluate the effectiveness and efficiency of an acenocoumarol dosing algorithm developed by our group which includes demographic, clinical and pharmacogenetic variables (VKORC1, CYP2C9, CYP4F2 and ApoE) in patients with venous thromboembolism (VTE). Methods and design This is a multicenter, single-blind, randomized controlled clinical trial. The protocol has been approved by the La Paz University Hospital Research Ethics Committee and by the Spanish Drug Agency. Two hundred and forty patients with VTE in whom oral anticoagulant therapy is indicated will be included. Randomization (case/control 1:1) will be stratified by center. Acenocoumarol dose in the control group will be scheduled and adjusted following common clinical practice; in the experimental arm, dosing will follow an individualized algorithm developed and validated by our group. Patients will be followed for three months. The main endpoints are: 1) Percentage of patients with INR within the therapeutic range on day seven after initiation of oral anticoagulant therapy; 2) Time from the start of oral anticoagulant treatment to achievement of a stable INR within the therapeutic range; 3) Number of INR determinations within the therapeutic range in the first six weeks of treatment. Discussion To date, there are no clinical trials comparing a pharmacogenetic acenocoumarol dosing algorithm versus routine clinical practice in VTE. Implementation of this pharmacogenetic algorithm in the clinical practice routine could reduce side effects and improve patient safety. Trial registration EudraCT identifier: 2009-016643-18. PMID:23237631

  17. Quantitative Features of Liver Lesions, Lung Nodules, and Renal Stones at Multi-Detector Row CT Examinations: Dependency on Radiation Dose and Reconstruction Algorithm.

    PubMed

    Solomon, Justin; Mileto, Achille; Nelson, Rendon C; Roy Choudhury, Kingshuk; Samei, Ehsan

    2016-04-01

    To determine if radiation dose and reconstruction algorithm affect the computer-based extraction and analysis of quantitative imaging features in lung nodules, liver lesions, and renal stones at multi-detector row computed tomography (CT). Retrospective analysis of data from a prospective, multicenter, HIPAA-compliant, institutional review board-approved clinical trial was performed by extracting 23 quantitative imaging features (size, shape, attenuation, edge sharpness, pixel value distribution, and texture) of lesions on multi-detector row CT images of 20 adult patients (14 men, six women; mean age, 63 years; range, 38-72 years) referred for known or suspected focal liver lesions, lung nodules, or kidney stones. Data were acquired between September 2011 and April 2012. All multi-detector row CT scans were performed at two different radiation dose levels; images were reconstructed with filtered back projection, adaptive statistical iterative reconstruction, and model-based iterative reconstruction (MBIR) algorithms. A linear mixed-effects model was used to assess the effect of radiation dose and reconstruction algorithm on the extracted features. Among the 23 imaging features assessed, radiation dose had a significant effect on five, three, and four of the features for liver lesions, lung nodules, and renal stones, respectively (P < .002 for all comparisons). Adaptive statistical iterative reconstruction had a significant effect on three, one, and one of the features for liver lesions, lung nodules, and renal stones, respectively (P < .002 for all comparisons). MBIR reconstruction had a significant effect on nine, 11, and 15 of the features for liver lesions, lung nodules, and renal stones, respectively (P < .002 for all comparisons). Of note, the measured size of lung nodules and renal stones with MBIR was significantly different from that with the other two algorithms (P < .002 for all comparisons). Although lesion texture was significantly affected by the reconstruction algorithm used (average of 3.33 features affected by MBIR across lesion types; P < .002 for all comparisons), no significant effect of the radiation dose setting was observed for all but one of the texture features (P = .002-.998). Radiation dose settings and reconstruction algorithms affect the extraction and analysis of quantitative imaging features in lesions at multi-detector row CT.

  18. Towards Data-Driven Simulations of Wildfire Spread using Ensemble-based Data Assimilation

    NASA Astrophysics Data System (ADS)

    Rochoux, M. C.; Bart, J.; Ricci, S. M.; Cuenot, B.; Trouvé, A.; Duchaine, F.; Morel, T.

    2012-12-01

    Real-time prediction of a propagating wildfire remains a challenging task because the problem involves both multiple physics and multiple scales. The propagation speed of wildfires, also called the rate of spread (ROS), is determined by complex interactions between pyrolysis, combustion and flow dynamics, and by atmospheric dynamics occurring at vegetation, topographical and meteorological scales. Current operational fire spread models are mainly based on a semi-empirical parameterization of the ROS in terms of vegetation, topographical and meteorological properties. For fire spread simulation to be predictive and compatible with operational applications, the uncertainty in the ROS model should be reduced. As recent progress in remote sensing technology provides new ways to monitor the fire front position, a promising approach to overcome the difficulties found in wildfire spread simulations is to integrate fire modeling and fire sensing technologies using data assimilation (DA). For this purpose we have developed a prototype data-driven wildfire spread simulator in order to provide optimal estimates of poorly known model parameters [*]. The data-driven simulation capability is adapted for more realistic wildfire spread: it considers a regional-scale fire spread model that is informed by observations of the fire front location. An Ensemble Kalman Filter (EnKF) algorithm based on a parallel computing platform (OpenPALM) was implemented in order to perform a multi-parameter sequential estimation in which wind magnitude and direction are estimated in addition to vegetation properties (see attached figure). The EnKF algorithm shows good ability to track a small-scale grassland fire experiment and properly accounts for the sensitivity of the simulation outcomes to the control parameters. In conclusion, data assimilation is a promising approach to more accurately forecast time-varying wildfire spread conditions as new airborne-like observations of the fire front location become available. [*] Rochoux, M.C., Delmotte, B., Cuenot, B., Ricci, S., and Trouvé, A. (2012) "Regional-scale simulations of wildland fire spread informed by real-time flame front observations", Proc. Combust. Inst., 34, in press. http://dx.doi.org/10.1016/j.proci.2012.06.090 [Figure caption: EnKF-based tracking of a small-scale grassland fire experiment, with estimation of wind and fuel parameters.]
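
    The analysis step of a stochastic EnKF for this kind of parameter estimation can be sketched generically (textbook formulation with an assumed spread_model observation operator that maps a parameter vector, e.g. wind speed/direction and fuel properties, to predicted fire-front positions; not the OpenPALM implementation):

        import numpy as np

        def enkf_update(ensemble, obs, spread_model, obs_err_std, rng):
            """One stochastic EnKF analysis step.

            ensemble : (n_members, n_params) parameter vectors
            obs      : (n_obs,) observed fire-front positions
            """
            X = np.asarray(ensemble, float)
            Y = np.array([spread_model(x) for x in X])    # predicted fronts
            Xp, Yp = X - X.mean(0), Y - Y.mean(0)
            n = len(X)
            Pxy = Xp.T @ Yp / (n - 1)                     # param-obs covariance
            Pyy = Yp.T @ Yp / (n - 1) + obs_err_std**2 * np.eye(len(obs))
            K = Pxy @ np.linalg.inv(Pyy)                  # Kalman gain
            perturbed = obs + rng.normal(0.0, obs_err_std, size=Y.shape)
            return X + (perturbed - Y) @ K.T              # updated parameter ensemble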

  19. Improving the Estimation of Mealtime Insulin Dose in Adults With Type 1 Diabetes

    PubMed Central

    Bao, Jiansong; Gilbertson, Heather R.; Gray, Robyn; Munns, Diane; Howard, Gabrielle; Petocz, Peter; Colagiuri, Stephen; Brand-Miller, Jennie C.

    2011-01-01

    OBJECTIVE Although carbohydrate counting is routine practice in type 1 diabetes, hyperglycemic episodes are common. A food insulin index (FII) has been developed and validated for predicting the normal insulin demand generated by mixed meals in healthy adults. We sought to compare a novel algorithm on the basis of the FII for estimating mealtime insulin dose with carbohydrate counting in adults with type 1 diabetes. RESEARCH DESIGN AND METHODS A total of 28 patients using insulin pump therapy consumed two different breakfast meals of equal energy, glycemic index, fiber, and calculated insulin demand (both FII = 60) but approximately twofold difference in carbohydrate content, in random order on three consecutive mornings. On one occasion, a carbohydrate-counting algorithm was applied to meal A (75 g carbohydrate) for determining bolus insulin dose. On the other two occasions, carbohydrate counting (about half the insulin dose as meal A) and the FII algorithm (same dose as meal A) were applied to meal B (41 g carbohydrate). A real-time continuous glucose monitor was used to assess 3-h postprandial glycemia. RESULTS Compared with carbohydrate counting, the FII algorithm significantly decreased glucose incremental area under the curve over 3 h (–52%, P = 0.013) and peak glucose excursion (–41%, P = 0.01) and improved the percentage of time within the normal blood glucose range (4–10 mmol/L) (31%, P = 0.001). There was no significant difference in the occurrence of hypoglycemia. CONCLUSIONS An insulin algorithm based on physiological insulin demand evoked by foods in healthy subjects may be a useful tool for estimating mealtime insulin dose in patients with type 1 diabetes. PMID:21949219

  20. Comparison of the effect of plasma treatment and gamma ray irradiation on PS-Cu nanocomposite films surface

    NASA Astrophysics Data System (ADS)

    Farag, O. F.

    2018-06-01

    Polystyrene-copper (PS-Cu) nanocomposite films were treated with DC N2 plasma and gamma-ray irradiation. The plasma treatment of the PS-Cu film surface was carried out at different treatment times, a gas pressure of 0.4 Torr and an applied power of 3.5 W. The gamma-ray treatment was carried out at irradiation doses of 10, 30 and 50 kGy. The induced changes in the surface properties of the PS-Cu films were investigated with UV-vis spectroscopy, scanning electron microscopy (SEM) and FTIR spectroscopy. In addition, the wettability, surface free energy, spreading coefficient and surface roughness of the treated samples were studied by measuring the contact angle. The UV-vis analysis revealed that the optical band gap decreases with increasing treatment time and irradiation dose for the plasma and gamma treatments, respectively. SEM observations showed that the size of the copper particles increased with increasing treatment time and irradiation dose, with gamma treatment changing the copper particle size from the nanoscale to the microscale. The contact angle measurements showed that the wettability, surface free energy, spreading coefficient and surface roughness of the treated PS-Cu samples increased remarkably with increasing treatment time and irradiation dose for the plasma and gamma treatments, respectively. The contact angle, surface free energy, spreading coefficient and surface roughness of the treated PS-Cu samples are more strongly influenced by plasma treatment than by gamma treatment.

  1. SU-F-T-619: Dose Evaluation of Specific Patient Plans Based On Monte Carlo Algorithm for a CyberKnife Stereotactic Radiosurgery System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piao, J; PLA 302 Hospital, Beijing; Xu, S

    2016-06-15

    Purpose: This study uses Monte Carlo to simulate the CyberKnife system and intends to develop a third-party tool to evaluate the dose verification of specific patient plans in the TPS. Methods: By simulating the treatment head using the BEAMnrc and DOSXYZnrc software, calculated and measured data were compared to determine the beam parameters. The dose distributions calculated with the Ray-tracing and Monte Carlo algorithms of the TPS (Multiplan Ver4.0.2) and with the in-house Monte Carlo simulation method were analyzed for 30 patient plans, comprising 10 head, 10 lung and 10 liver cases. A γ analysis with the combined 3mm/3% criteria was introduced to quantitatively evaluate the difference in accuracy between the three algorithms. Results: More than 90% of the global error points were less than 2% in the comparison of the PDD and OAR curves after determining the mean energy and FWHM. A reasonably ideal Monte Carlo beam model was thereby established. Based on the quantitative evaluation of dose accuracy for the three algorithms, the γ analysis showed that the passing rates of the PTV in the 30 plans between the Monte Carlo simulation and the TPS Monte Carlo algorithm were good (84.88±9.67% for head, 98.83±1.05% for liver, 98.26±1.87% for lung). The passing rates of the PTV in head and liver plans between the Monte Carlo simulation and the TPS Ray-tracing algorithm were also good (95.93±3.12% and 99.84±0.33%, respectively). However, the difference in DVHs in lung plans between the Monte Carlo simulation and the Ray-tracing algorithm was obvious, and the γ passing rate (51.263±38.964%) was not good. It is feasible to use Monte Carlo simulation to verify the dose distribution of patient plans. Conclusion: The Monte Carlo simulation algorithm developed for the CyberKnife system in this study can be used as a reference third-party tool, which plays an important role in dose verification of patient plans. This work was supported in part by a grant from the Chinese Natural Science Foundation (Grant No. 11275105). Thanks for the support from Accuray Corp.

  2. SU-E-T-202: Impact of Monte Carlo Dose Calculation Algorithm On Prostate SBRT Treatments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Venencia, C; Garrigo, E; Cardenas, J

    2014-06-01

    Purpose: The purpose of this work was to quantify the dosimetric impact of using a Monte Carlo algorithm on SBRT prostate treatments previously calculated with a pencil beam dose calculation algorithm. Methods: A 6MV photon beam produced by a Novalis TX (BrainLAB-Varian) linear accelerator equipped with HDMLC was used. Treatment plans were created using 9 fields with Iplan v4.5 (BrainLAB) and the dynamic IMRT modality. The institutional SBRT protocol uses a total dose to the prostate of 40Gy in 5 fractions, every other day. Dose calculation was done by pencil beam (2mm dose resolution) with heterogeneity correction and dose volume constraints (UCLA): PTV D95%=40Gy and D98%>39.2Gy; Rectum V20Gy<50%, V32Gy<20%, V36Gy<10% and V40Gy<5%; Bladder V20Gy<40% and V40Gy<10%; femoral heads V16Gy<5%; penile bulb V25Gy<3cc; urethra and the overlap region between PTV and PRV Rectum Dmax<42Gy. 10 SBRT treatment plans were selected and recalculated using Monte Carlo with 2mm spatial resolution and a mean variance of 2%. DVH comparisons between plans were done. Results: The average differences between PTV dose constraints were within 2%. However, 3 plans had differences higher than 3%, which did not meet the D98% criterion (>39.2Gy) and should have been renormalized. Dose volume constraint differences for rectum, bladder, femoral heads and penile bulb were less than 2% and within tolerances. The urethra region and the overlap between PTV and PRV Rectum showed dose increases in all plans. The average difference for the urethra region was 2.1% with a maximum of 7.8%, and for the overlap region 2.5% with a maximum of 8.7%. Conclusion: Monte Carlo dose calculation on dynamic IMRT treatments can affect plan normalization. The dose increases in the critical urethra region and in the overlap region with the PTV could have clinical consequences, which need to be studied. The use of a Monte Carlo dose calculation algorithm is limited because inverse planning dose optimization uses only pencil beam.

  3. Validation of a method for in vivo 3D dose reconstruction for IMRT and VMAT treatments using on-treatment EPID images and a model-based forward-calculation algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Uytven, Eric, E-mail: eric.vanuytven@cancercare.mb.ca; Van Beek, Timothy; McCowan, Peter M.

    2015-12-15

    Purpose: Radiation treatments are trending toward delivering higher doses per fraction under stereotactic radiosurgery and hypofractionated treatment regimens. There is a need for accurate 3D in vivo patient dose verification using electronic portal imaging device (EPID) measurements. This work presents a model-based technique to compute the full three-dimensional patient dose reconstructed from on-treatment EPID portal images (i.e., transmission images). Methods: EPID dose is converted to incident fluence entering the patient using a series of steps which include converting measured EPID dose to fluence at the detector plane and then back-projecting the primary source component of the EPID fluence upstream of the patient. Incident fluence is then recombined with predicted extra-focal fluence and used to calculate 3D patient dose via a collapsed-cone convolution method. The method is implemented in an iterative manner, although in practice it provides accurate results in a single iteration. The robustness of the dose reconstruction technique is demonstrated with several simple slab phantom and nine anthropomorphic phantom cases. Prostate, head and neck, and lung treatments are all included, as well as a range of delivery techniques including VMAT and dynamic intensity modulated radiation therapy (IMRT). Results: The patient dose reconstruction algorithm compares well with treatment planning system computed doses for controlled test situations. For simple phantom and square field tests, agreement was excellent, with a 2%/2 mm 3D chi pass rate ≥98.9%. On anthropomorphic phantoms, the 2%/2 mm 3D chi pass rates ranged from 79.9% to 99.9% in the planning target volume (PTV) region and 96.5% to 100% in the low dose region (>20% of prescription, excluding PTV and skin build-up region). Conclusions: An algorithm to reconstruct delivered patient 3D doses from EPID exit dosimetry measurements was presented. The method was applied to phantom and patient data sets, as well as to dynamic IMRT and VMAT delivery techniques. Results indicate that the EPID dose reconstruction algorithm presented in this work is suitable for clinical implementation.

  4. A class solution for volumetric-modulated arc therapy planning in postprostatectomy radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forde, Elizabeth, E-mail: eforde@tcd.ie; Bromley, Regina; Institute of Medical Physics, School of Physics, University of Sydney, New South Wales

    This study aimed to test a postprostatectomy volumetric-modulated arc therapy (VMAT) planning class solution. The solution applies to both the progressive resolution optimizer algorithm version 2 (PRO 2) and version 3 (PRO 3), addressing the effect of an upgraded algorithm. A total of 10 radical postprostatectomy patients received 68 Gy to 95% of the planning target volume (PTV), planned using VMAT. Each case followed a set of planning instructions, including contouring, field setup, and predetermined optimization parameters. Each case was run through both algorithms only once, with no user interaction. Results were averaged and compared against Radiation Therapy Oncology Group (RTOG) 0534 end points. In addition, the clinical target volume (CTV) D100, PTV D99, and PTV mean doses were recorded, along with conformity indices (CIs) (95% and 98%) and the homogeneity index. All cases satisfied PTV D95 of 68 Gy and a maximum dose <74.8 Gy. The average result for the PTV D99 was 64.1 Gy for PRO 2 and 62.1 Gy for PRO 3. The average PTV mean dose was 71.4 Gy for PRO 2 and 71.5 Gy for PRO 3. The average CTV D100 dose was 67.7 and 68.0 Gy for PRO 2 and PRO 3, respectively. The mean homogeneity index for both algorithms was 0.08. The average 95% CI was 1.17 for PRO 2 and 1.19 for PRO 3. For 98%, the average results were 1.08 and 1.12 for PRO 2 and PRO 3, respectively. All cases for each algorithm met the RTOG organ-at-risk dose constraints. A successful class solution has been established for prostate bed VMAT radiotherapy regardless of the algorithm used.

  5. TU-D-209-03: Alignment of the Patient Graphic Model Using Fluoroscopic Images for Skin Dose Mapping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oines, A; Oines, A; Kilian-Meneghin, J

    2016-06-15

    Purpose: The Dose Tracking System (DTS) was developed to provide real-time feedback of skin dose and dose rate during interventional fluoroscopic procedures. A color map on a 3D graphic of the patient represents the cumulative dose distribution on the skin. Automated image correlation algorithms are described which use the fluoroscopic procedure images to align and scale the patient graphic for more accurate dose mapping. Methods: Currently, the DTS employs manual patient graphic selection and alignment. To improve the accuracy of dose mapping and automate the software, various methods are explored to extract information about the beam location and patient morphology from the procedure images. To match patient anatomy with a reference projection image, preprocessing is first applied, including edge enhancement, edge detection, and contour detection. Template matching algorithms from OpenCV are then employed to find the location of the beam. Once a match is found, the reference graphic is scaled and rotated to fit the patient, using image registration correlation functions in Matlab. The algorithm runs correlation functions for all points and maps the correlation confidences to a surface map. The point of highest correlation is used for alignment and scaling. The transformation data are saved for later model scaling. Results: Anatomic recognition finds matching features between model and image, and image registration correlation provides alignment and scaling at any rotation angle with less than one-second runtime, at noise levels in excess of 150% of those found in normal procedures. Conclusion: The algorithm provides the necessary scaling and alignment tools to improve the accuracy of dose distribution mapping on the patient graphic with the DTS. Partial support from NIH Grant R01-EB002873 and Toshiba Medical Systems Corp.
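
    The OpenCV matching step can be sketched directly with the library's own API (normalized cross-correlation; the preprocessing chain and the reference patch are assumed to come from the steps described above):

        import cv2

        def locate_beam(fluoro_frame, reference_patch):
            """Find the reference patch in a fluoroscopic frame.

            Both inputs must share a dtype (e.g. uint8 or float32) and the
            patch must be no larger than the frame. Returns the top-left
            corner of the best match and its correlation confidence.
            """
            result = cv2.matchTemplate(fluoro_frame, reference_patch,
                                       cv2.TM_CCOEFF_NORMED)
            _, max_val, _, max_loc = cv2.minMaxLoc(result)
            return max_loc, max_val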

  6. Implementation of Monte Carlo Dose calculation for CyberKnife treatment planning

    NASA Astrophysics Data System (ADS)

    Ma, C.-M.; Li, J. S.; Deng, J.; Fan, J.

    2008-02-01

    Accurate dose calculation is essential to advanced stereotactic radiosurgery (SRS) and stereotactic radiotherapy (SRT), especially for treatment planning involving heterogeneous patient anatomy. This paper describes the implementation of a fast Monte Carlo dose calculation algorithm in SRS/SRT treatment planning for the CyberKnife® SRS/SRT system. A superposition Monte Carlo algorithm is developed for this application. Photon mean free paths and interaction types for different materials and energies, as well as the tracks of secondary electrons, are pre-simulated using the MCSIM system. Photon interaction forcing and splitting are applied to the source photons in the patient calculation, and the pre-simulated electron tracks are repeated with proper corrections based on the tissue density and electron stopping powers. Electron energy is deposited along the tracks and accumulated in the simulation geometry. Scattered and bremsstrahlung photons are transported, after applying the Russian roulette technique, in the same way as the primary photons. Dose calculations are compared with full Monte Carlo simulations performed using EGS4/MCSIM and with the CyberKnife treatment planning system (TPS) for lung, head & neck and liver treatments. Comparisons with full Monte Carlo simulations show excellent agreement (within 0.5%). Differences of more than 10% in the target dose are found between Monte Carlo simulations and the CyberKnife TPS for SRS/SRT lung treatment, while negligible differences are shown in head and neck and liver for the cases investigated. The calculation time using our superposition Monte Carlo algorithm is reduced by up to a factor of 62 (46 on average for 10 typical clinical cases) compared with full Monte Carlo simulations. SRS/SRT dose distributions calculated by simple dose algorithms may be significantly overestimated for small lung target volumes, which can be improved by accurate Monte Carlo dose calculations.

  7. SU-F-T-236: Comparison of Two IMRT/VMAT QA Systems Using Gamma Index Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dogan, N; Denissova, S

    2016-06-15

    Purpose: The goal of this study is to assess differences in Gamma index pass rates when using two commercial QA systems and to provide optimum Gamma index parameters for pre-treatment patient-specific QA. Methods: Twenty-two VMAT cases consisting of prostate, lung, head and neck, spine, brain and pancreas were included in this study. The verification plans were calculated using the AcurosXB (V11) algorithm for different dose grids (1.5mm, 2.5mm, 3mm). The measurements were performed on a TrueBeam (Varian) accelerator using both the EPID (S1000) portal imager and ArcCheck (SunNuclearCorp) devices. Gamma index criteria of 3%/3mm, 2%/3mm, and 2%/2mm and threshold (TH) doses of 5% to 50% were used in the analysis. Results: The differences in Gamma pass rates between the two devices are not statistically significant for 3%/3mm, yielding pass rates higher than 95%. Increasing the lower-dose TH reduced pass rates for both devices; the more pronounced effect for ArcCheck can be attributed to a higher contribution from the spread of the low-dose region. As expected, tightening the criteria to 2%/2mm (TH: 10%) decreased Gamma pass rates below 95%, with EPID pass rates (92%) higher than ArcCheck pass rates (86%), probably due to better spatial resolution. Portal Dosimetry results showed lower Gamma pass rates for composite plans compared to individual-field pass rates. This may be due to the expansion of the analyzed region, which includes pixels not included in the separate-field analysis. Decreasing the dose grid size from 2.5mm to 1.5mm did not show statistically significant (p<0.05) differences in Gamma pass rates for either QA device. Conclusion: Overall, both systems' measurements agree well with the calculated dose when using a gamma index criterion of 3%/3mm for a variety of VMAT cases. Variability between the two systems increases with different dose grids, THs and tighter gamma criteria, and must be carefully assessed prior to clinical use.
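
    Gamma analysis of this kind combines a dose-difference test with a distance-to-agreement test. The following is a minimal 1D sketch of a global gamma computation, not either vendor's implementation; array names and the global normalization are assumptions.

        import numpy as np

        def gamma_index_1d(ref_dose, eval_dose, positions, dose_tol=0.03, dist_tol=3.0):
            """Simplified global 1D gamma (e.g., 3%/3mm): for each reference
            point, take the minimum combined dose/distance deviation over
            all evaluated points. Positions are in mm."""
            norm = dose_tol * ref_dose.max()
            gammas = np.empty(ref_dose.size)
            for i, (r_pos, r_dose) in enumerate(zip(positions, ref_dose)):
                dist2 = ((positions - r_pos) / dist_tol) ** 2
                dose2 = ((eval_dose - r_dose) / norm) ** 2
                gammas[i] = np.sqrt((dist2 + dose2).min())
            return gammas

        # Pass rate: fraction of points with gamma <= 1.
        # pass_rate = (gamma_index_1d(ref, ev, x) <= 1).mean()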

  8. A sparsity-based iterative algorithm for reconstruction of micro-CT images from highly undersampled projection datasets obtained with a synchrotron X-ray source

    NASA Astrophysics Data System (ADS)

    Melli, S. Ali; Wahid, Khan A.; Babyn, Paul; Cooper, David M. L.; Gopi, Varun P.

    2016-12-01

    Synchrotron X-ray Micro Computed Tomography (Micro-CT) is an imaging technique which is increasingly used for non-invasive in vivo preclinical imaging. However, it often requires a large number of projections from many different angles to reconstruct high-quality images, leading to significantly high radiation doses and long scan times. To utilize this imaging technique further for in vivo imaging, we need to design reconstruction algorithms that reduce the radiation dose and scan time without reducing reconstructed image quality. This research is focused on using a combination of gradient-based Douglas-Rachford splitting and discrete wavelet packet shrinkage image denoising methods to design an algorithm for reconstruction of large-scale reduced-view synchrotron Micro-CT images with acceptable quality metrics. These quality metrics are computed by comparing the reconstructed images with a high-dose reference image reconstructed from 1800 equally spaced projections spanning 180°. Visual and quantitative performance assessment of a synthetic head phantom and a femoral cortical bone sample imaged at the biomedical imaging and therapy bending magnet beamline at the Canadian Light Source demonstrates that the proposed algorithm is superior to existing reconstruction algorithms. Using the proposed reconstruction algorithm to reduce the number of projections in synchrotron Micro-CT is an effective way to reduce the overall radiation dose and scan time, which improves in vivo imaging protocols.

  9. Low dose CT reconstruction via L1 norm dictionary learning using alternating minimization algorithm and balancing principle.

    PubMed

    Wu, Junfeng; Dai, Fang; Hu, Gang; Mou, Xuanqin

    2018-04-18

    Excessive radiation exposure in computed tomography (CT) scans increases the chance of developing cancer and has become a major clinical concern. Recently, statistical iterative reconstruction (SIR) with l0-norm dictionary learning regularization has been developed to reconstruct CT images from low-dose and few-view datasets in order to reduce radiation dose. Nonetheless, the sparse regularization term adopted in this approach is the l0-norm, which cannot guarantee the global convergence of the algorithm. To address this problem, in this study we introduced an l1-norm dictionary learning penalty into the SIR framework for low dose CT image reconstruction, and developed an alternating minimization algorithm to minimize the associated objective function, which transforms the CT image reconstruction problem into a sparse coding subproblem and an image updating subproblem. During the image updating process, an efficient model function approach based on the balancing principle is applied to choose the regularization parameters. The proposed alternating minimization algorithm was evaluated first using real projection data of a sheep lung CT perfusion and then using numerical simulations based on a sheep lung CT image and a chest image. Both visual assessment and quantitative comparison in terms of root mean square error (RMSE) and structural similarity (SSIM) index demonstrated that the new image reconstruction algorithm yielded performance similar to that of the l0-norm dictionary learning penalty and outperformed the conventional filtered backprojection (FBP) and total variation (TV) minimization algorithms.
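
    The sparse coding subproblem with an l1 penalty has a standard proximal-gradient (ISTA-style) solution built on soft thresholding. A generic sketch with a fixed dictionary D, not the authors' full alternating scheme or their balancing-principle parameter choice:

        import numpy as np

        def soft_threshold(x, lam):
            """Proximal operator of the l1 norm: shrink magnitudes, keep signs."""
            return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

        def sparse_code(D, patches, lam, n_iter=50):
            """ISTA iterations for min_a 0.5*||D a - p||^2 + lam*||a||_1."""
            L = np.linalg.norm(D, 2) ** 2      # Lipschitz constant of the gradient
            alpha = np.zeros((D.shape[1], patches.shape[1]))
            for _ in range(n_iter):
                grad = D.T @ (D @ alpha - patches)
                alpha = soft_threshold(alpha - grad / L, lam / L)
            return alpha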

  10. A fast method to emulate an iterative POCS image reconstruction algorithm.

    PubMed

    Zeng, Gengsheng L

    2017-10-01

    Iterative image reconstruction algorithms are commonly used to optimize an objective function, especially when the objective function is nonquadratic. Generally speaking, iterative algorithms are computationally inefficient. This paper presents a fast algorithm that has one backprojection and no forward projection, and derives a new method to solve the associated optimization problem. The nonquadratic constraint, for example an edge-preserving denoising constraint, is implemented as a nonlinear filter. The algorithm is derived based on the POCS (projections onto convex sets) approach. A windowed FBP (filtered backprojection) algorithm enforces the data fidelity. An iterative procedure, divided into segments, enforces edge-enhancement denoising; each segment performs nonlinear filtering. The derived iterative algorithm is computationally efficient: it contains only one backprojection and no forward projection. Low-dose CT data are used for algorithm feasibility studies. The nonlinearity is implemented as an edge-enhancing noise-smoothing filter. The patient-study results demonstrate its effectiveness in processing low-dose x-ray CT data. This fast algorithm can be used to replace many iterative algorithms. © 2017 American Association of Physicists in Medicine.
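
    The structure of the method (a single windowed FBP for data fidelity, then segments of nonlinear filtering) can be sketched as follows. This is a loose sketch under stated assumptions: `fbp` is a user-supplied windowed-FBP routine, and a median filter stands in for the paper's edge-enhancing noise-smoothing filter.

        import numpy as np
        from scipy.ndimage import median_filter

        def fast_pocs_emulation(sinogram, fbp, n_segments=5, filters_per_segment=4):
            """One backprojection, no forward projection: reconstruct once,
            then apply segments of nonlinear (edge-preserving) filtering."""
            image = fbp(sinogram)              # the only backprojection
            for _ in range(n_segments):
                for _ in range(filters_per_segment):
                    image = median_filter(image, size=3)
            return image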

  11. Statistic and dosimetric criteria to assess the shift of the prescribed dose for lung radiotherapy plans when integrating point kernel models in medical physics: are we ready?

    PubMed

    Chaikh, Abdulhamid; Balosso, Jacques

    2016-12-01

    To apply statistical bootstrap analysis and dosimetric criteria to assess the change in prescribed dose (PD) for lung cancer needed to maintain the same clinical results when using new generations of dose calculation algorithms. Nine lung cancer cases were studied. For each patient, three treatment plans were generated using exactly the same beam arrangements. In plan 1, the dose was calculated using the pencil beam convolution (PBC) algorithm with heterogeneity correction using modified Batho (PBC-MB). In plan 2, the dose was calculated using the anisotropic analytical algorithm (AAA) with the same PD as plan 1. In plan 3, the dose was calculated using AAA with the monitor units (MUs) obtained from PBC-MB as input. The dosimetric criteria include MUs, delivered dose at the isocentre (Diso) and calculated dose to 95% of the target volume (D95). The bootstrap method was used to assess the significance of the dose differences and to accurately estimate the 95% confidence interval (95% CI). Wilcoxon and Spearman's rank tests were used to calculate P values and the correlation coefficient (ρ). A statistically significant dose difference was found with the point kernel model. A good correlation was observed between both algorithm types, with ρ>0.9. Using AAA instead of PBC-MB, an adjustment of the PD at the isocentre is suggested. For a given set of patients, we assessed the need to readjust the PD for lung cancer using dosimetric indices and the bootstrap statistical method. Thus, if the goal is to maintain the same clinical results, the PD for lung tumors has to be adjusted with AAA. According to our simulation, we suggest readjusting the PD by 5% and optimizing the beam arrangements to better protect the organs at risk (OARs).
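
    The bootstrap step can be illustrated with a percentile-bootstrap confidence interval over per-patient dose differences. A minimal numpy sketch; the example values in the comment are hypothetical, not the study's data.

        import numpy as np

        rng = np.random.default_rng(0)

        def bootstrap_ci(differences, n_resamples=10000, alpha=0.05):
            """Percentile-bootstrap CI for the mean paired dose difference."""
            differences = np.asarray(differences, dtype=float)
            means = np.array([
                rng.choice(differences, size=differences.size, replace=True).mean()
                for _ in range(n_resamples)
            ])
            lo, hi = np.percentile(means, [100 * alpha / 2, 100 * (1 - alpha / 2)])
            return differences.mean(), (lo, hi)

        # Hypothetical per-patient D95 differences (%) between AAA and PBC-MB:
        # mean, ci = bootstrap_ci([4.2, 5.1, 3.8, 6.0, 4.9, 5.5, 4.4, 5.2, 3.9])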

  12. Fiber-Coupled, Time-Gated Al2O3:C Radioluminescence Dosimetry Technique and Algorithm for Radiation Therapy With LINACs

    NASA Astrophysics Data System (ADS)

    Magne, Sylvain; Deloule, Sybelle; Ostrowsky, Aimé; Ferdinand, Pierre

    2013-08-01

    An original algorithm for real-time In Vivo Dosimetry (IVD) based on Radioluminescence (RL) of dosimetric-grade Al2O3:C crystals is described and demonstrated in reference conditions with 12-MV photon beams from a Saturne 43 linear accelerator (LINAC), simulating External Beam Radiation Therapy (EBRT) treatments. During the course of irradiation, a portion of the electrons is trapped within the Al2O3:C crystal while another portion recombines and generates RL, which is recorded on-line through an optical fiber. The RL sensitivity is dose-dependent and increases in accordance with the concentration of trapped electrons. Once irradiation is completed, the Al2O3:C crystal is reset by laser light (making it reusable) and the resultant OSL (Optically Stimulated Luminescence) is also collected back by the remote RL-OSL reader and finally integrated to yield the absorbed dose. During irradiation, scintillation and Cerenkov lights generated within the optical fiber ("stem effect") are removed by a time-discrimination method involving a discriminating unit and a fiber-coupled BGO scintillator placed in the irradiation room, next to the LINAC. The RL signals were then calibrated with respect to reference dose and dose rate data using an ionization chamber (IC). The algorithm relies upon the integral of the RL and provides the accumulated dose (useful to the medical physicist) at any time during irradiation, the dose rate being derived afterwards. It is tested with both step and arbitrary dose rate profiles, manually operated from the LINAC control desk. The doses measured by RL and OSL are both compared to reference doses, and the deviations are about ±2% and ±1% respectively, thus demonstrating the reliability of the algorithm for arbitrary profiles and a wide range of dose rates. Although the calculation was done off-line, it is amenable to real-time processing during irradiation.
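
    The accumulated-dose output can be sketched as a running numerical integration of the RL time series, ignoring the dose-dependent sensitivity correction the authors apply; the `calibration` factor is assumed to come from the ionization-chamber cross-calibration described above.

        import numpy as np

        def accumulated_dose(rl_signal, times, calibration):
            """Running trapezoidal integral of the RL signal, converted to
            dose (Gy) via a calibration factor in Gy per integrated RL unit."""
            increments = np.diff(times) * 0.5 * (rl_signal[1:] + rl_signal[:-1])
            return calibration * np.concatenate(([0.0], np.cumsum(increments)))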

  13. A Toolbox to Improve Algorithms for Insulin-Dosing Decision Support

    PubMed Central

    Donsa, K.; Plank, J.; Schaupp, L.; Mader, J. K.; Truskaller, T.; Tschapeller, B.; Höll, B.; Spat, S.; Pieber, T. R.

    2014-01-01

    Background: Standardized insulin order sets for subcutaneous basal-bolus insulin therapy are recommended by clinical guidelines for the inpatient management of diabetes. The algorithm-based GlucoTab system electronically assists health care personnel by supporting clinical workflow and providing insulin-dose suggestions. Objective: To develop a toolbox for improving clinical decision-support algorithms. Methods: The toolbox has three main components. 1) Data preparation: Data from several heterogeneous sources are extracted, cleaned and stored in a uniform data format. 2) Simulation: The effects of algorithm modifications are estimated by simulating treatment workflows based on real data from clinical trials. 3) Analysis: Algorithm performance is measured, analyzed and simulated using data from three clinical trials with a total of 166 patients. Results: Use of the toolbox led to algorithm improvements as well as the detection of potential individualized subgroup-specific algorithms. Conclusion: These results are a first step towards individualized algorithm modifications for specific patient subgroups. PMID:25024768

  14. Mining Top K Spread Sources for a Specific Topic and a Given Node.

    PubMed

    Liu, Weiwei; Deng, Zhi-Hong; Cao, Longbing; Xu, Xiaoran; Liu, He; Gong, Xiuwen

    2015-11-01

    In social networks, nodes (or users) interested in specific topics are often influenced by others. The influence is usually associated with a set of nodes rather than a single one. An interesting but challenging task for any given topic and node is to find the set of nodes that represents the source or trigger for the topic and thus identify those nodes that have the greatest influence on the given node as the topic spreads. We find that this is an NP-hard problem. This paper proposes an effective framework to deal with it. First, the topic propagation is represented as a Bayesian network. We then construct the propagation model using a variant of the voter model. The probability transition matrix (PTM) algorithm is presented to conduct the probability inference with complexity O(θ³log₂θ), where θ is the number of nodes in the given graph. To evaluate the PTM algorithm, we conduct extensive experiments on real datasets. The experimental results show that the PTM algorithm is both effective and efficient.
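
    The abstract does not spell out the PTM inference, so the sketch below only illustrates the general idea of pushing probability mass from a candidate source set through a transition matrix; the column-stochastic matrix P and the random-walk reading are assumptions.

        import numpy as np

        def propagate_influence(P, source_set, n_nodes, n_steps=50):
            """Iterate a probability-transition matrix to estimate how much
            influence a candidate source set delivers to each node."""
            p = np.zeros(n_nodes)
            p[list(source_set)] = 1.0 / len(source_set)  # mass starts on sources
            for _ in range(n_steps):
                p = P @ p                                # one propagation step
            return p                                     # influence per node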

  15. Optimal aperture synthesis radar imaging

    NASA Astrophysics Data System (ADS)

    Hysell, D. L.; Chau, J. L.

    2006-03-01

    Aperture synthesis radar imaging has been used to investigate coherent backscatter from ionospheric plasma irregularities at Jicamarca and elsewhere for several years. Phenomena of interest include equatorial spread F, 150-km echoes, the equatorial electrojet, range-spread meteor trails, and mesospheric echoes. The sought-after images are related to spaced-receiver data mathematically through an integral transform, but direct inversion is generally impractical or suboptimal. We instead turn to statistical inverse theory, endeavoring to utilize fully all available information in the data inversion. The imaging algorithm used at Jicamarca is based on an implementation of the MaxEnt method developed for radio astronomy. Its strategy is to limit the space of candidate images to those that are positive definite, consistent with data to the degree required by experimental confidence limits; smooth (in some sense); and most representative of the class of possible solutions. The algorithm was improved recently by (1) incorporating the antenna radiation pattern in the prior probability and (2) estimating and including the full error covariance matrix in the constraints. The revised algorithm is evaluated using new 28-baseline electrojet data from Jicamarca.

  16. Point spread functions and deconvolution of ultrasonic images.

    PubMed

    Dalitz, Christoph; Pohle-Fröhlich, Regina; Michalk, Thorsten

    2015-03-01

    This article investigates the restoration of ultrasonic pulse-echo C-scan images by means of deconvolution with a point spread function (PSF). The deconvolution concept from linear system theory (LST) is linked to the wave equation formulation of the imaging process, and an analytic formula for the PSF of planar transducers is derived. For this analytic expression, different numerical and analytic approximation schemes for evaluating the PSF are presented. By comparing simulated images with measured C-scan images, we demonstrate that the assumptions of LST in combination with our formula for the PSF are a good model for the pulse-echo imaging process. To reconstruct the object from a C-scan image, we compare different deconvolution schemes: the Wiener filter, the ForWaRD algorithm, and the Richardson-Lucy algorithm. The best results are obtained with the Richardson-Lucy algorithm with total variation regularization. For distances greater than or equal to twice the near-field distance, our experiments show that the numerically computed PSF can be replaced with a simple closed analytic term based on a far-field approximation.
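
    Of the compared schemes, Richardson-Lucy has a particularly compact multiplicative update. A minimal 2D sketch, without the total variation regularization the authors found best:

        import numpy as np
        from scipy.signal import fftconvolve

        def richardson_lucy(observed, psf, n_iter=30):
            """Basic Richardson-Lucy deconvolution of a 2D image."""
            estimate = np.full_like(observed, observed.mean(), dtype=float)
            psf_mirror = psf[::-1, ::-1]
            for _ in range(n_iter):
                blurred = fftconvolve(estimate, psf, mode="same")
                ratio = observed / np.maximum(blurred, 1e-12)  # avoid div-by-zero
                estimate *= fftconvolve(ratio, psf_mirror, mode="same")
            return estimate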

  17. The effects of pig manure application on the spread of tetracycline resistance in bulk and cucumber rhizosphere soils: a greenhouse experiment.

    PubMed

    Kang, Yijun; Hao, Yangyang; Xia, Dan; Shen, Min; Li, Qing; Hu, Jian

    2017-07-01

    It is important to understand the dynamics of tetracycline-resistant bacteria (TRB) and tetracycline resistance genes (TRGs) in bulk and rhizosphere soils in order to evaluate the spread of TRGs from pig manure to humans. In this work, a greenhouse experiment was conducted to investigate differences in the abundance of TRB, tetracycline-resistant Escherichia coli (TRE), tetracycline-resistant Pseudomonas spp. (TRP), and TRGs between bulk and cucumber rhizosphere soils. The application of pig manure resulted in the long-term persistence of TRB, TRE, TRP, and TRGs in bulk soil and the rhizosphere of cucumber for at least 65 days. Pig manure application dose was the major driving force in altering the abundances of TRB and TRE, whereas TRP was affected mainly by compartment (bulk soil or rhizosphere). Both TRE and the percentage of TRE in bulk and rhizosphere soils increased linearly with increasing dose of pig manure. Exponential relationships between pig manure dose and both TRP and TRP percentage were also noted. There were significant differences in the relative abundances of TRGs between bulk and cucumber rhizosphere soils, suggesting that the use of pig manure exerted a more lasting impact on the spread of TRGs in the rhizosphere than in the bulk soil.

  18. SU-F-SPS-06: Implementation of a Back-Projection Algorithm for 2D in Vivo Dosimetry with An EPID System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hernandez Reyes, B; Rodriguez Perez, E; Sosa Aquino, M

    Purpose: To implement a back-projection algorithm for 2D dose reconstruction for in vivo dosimetry in radiation therapy using an Electronic Portal Imaging Device (EPID) based on amorphous silicon. Methods: An EPID system was used to determine the dose-response function, pixel sensitivity map, exponential scatter kernels and beam hardening correction for the back-projection algorithm. All measurements were done with a 6 MV beam. A 2D dose reconstruction for an irradiated water phantom (30×30×30 cm³) was done to verify the algorithm implementation. Gamma index evaluation between the 2D reconstructed dose and the dose calculated with a treatment planning system (TPS) was performed. Results: A linear fit was found for the dose-response function. The pixel sensitivity map has radial symmetry and was calculated from a profile of the pixel sensitivity variation. The parameters for the scatter kernels were determined only for a 6 MV beam. The primary dose was estimated applying the scatter kernel within the EPID and the scatter kernel within the patient. The beam hardening coefficient is σBH = 3.788×10⁻⁴ cm² and the effective linear attenuation coefficient is µAC = 0.06084 cm⁻¹. 95% of the evaluated points had γ values no greater than unity, with gamma criteria of ΔD = 3% and Δd = 3 mm, within the 50% isodose surface. Conclusion: The use of EPID systems proved to be a fast tool for in vivo dosimetry, but the implementation is more complex than that developed for pre-treatment dose verification; therefore, a simpler method should be investigated. The accuracy of this method could be improved by modifying the algorithm to allow comparison of lower isodose curves.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thiyagarajan, Rajesh; Vikraman, S; Karrthick, KP

    Purpose: To evaluate the impact of dose calculation algorithm on the dose distribution of biologically optimized Volumetric Modulated Arc Therapy (VMAT) plans for esophageal cancer. Methods: Eighteen retrospectively treated patients with carcinoma of the esophagus were studied. VMAT plans were optimized using biological objectives in the Monaco (5.0) TPS for a 6MV photon beam (Elekta Infinity). These plans were calculated for final dose using the Monte Carlo (MC), Collapsed Cone Convolution (CCC) and Pencil Beam Convolution (PBC) algorithms from the Monaco and Oncentra Masterplan TPS. A dose grid of 2mm was used for all algorithms and 1% per-plan uncertainty was maintained for the MC calculation. MC-based calculations were considered as the reference for CCC and PBC. Dose volume histogram (DVH) indices (D95, D98, D50 etc.) of the target (PTV) and critical structures were compared to study the impact of all three algorithms. Results: Beam models were consistent with measured data. The mean differences observed with reference to the MC calculation for D98, D95, D50 and D2 of the PTV were 0.37%, −0.21%, 1.51% and 1.18% respectively for CCC, and 3.28%, 2.75%, 3.61% and 3.08% for PBC. The Heart D25 mean difference was 4.94% and 11.21% for CCC and PBC respectively. The Lung Dmean mean difference was 1.5% (CCC) and 4.1% (PBC). The Spinal cord D2 mean difference was 2.35% (CCC) and 3.98% (PBC). Similar differences were observed for liver and kidneys. The overall mean differences found for target and critical structures were 0.71±1.52% and 2.71±3.10% for CCC, and 3.18±1.55% and 6.61±5.1% for PBC, respectively. Conclusion: We observed a significant overestimate of the dose distribution by CCC and PBC as compared to MC. The dose prediction of CCC is closer (<3%) to MC than that of PBC. This can be attributed to the poorer performance of CCC and PBC in inhomogeneous regions around the esophagus. CCC can be considered as an alternative in the absence of an MC algorithm.
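
    The DVH indices compared above reduce to percentiles of the voxel doses inside a structure. A minimal numpy sketch; `dose_grid` and `structure_mask` are assumed arrays, not part of any of the planning systems named above.

        import numpy as np

        def dvh_index(dose_voxels, volume_fraction):
            """D_x: the minimum dose received by the hottest x% of a structure,
            e.g., D95 is the dose covering 95% of the volume."""
            return np.percentile(dose_voxels, 100 - volume_fraction)

        # doses = dose_grid[structure_mask]   # voxel doses inside the PTV
        # d98, d95, d50, d2 = (dvh_index(doses, v) for v in (98, 95, 50, 2))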

  20. Comparison of three algorithms for initiation and titration of insulin glargine in insulin-naive patients with type 2 diabetes mellitus.

    PubMed

    Dailey, George; Aurand, Lisa; Stewart, John; Ameer, Barbara; Zhou, Rong

    2014-03-01

    Several titration algorithms can be used to adjust insulin dose and attain blood glucose targets. We compared clinical outcomes using three initiation and titration algorithms for insulin glargine in insulin-naive patients with type 2 diabetes mellitus (T2DM); focusing on those receiving both metformin and sulfonylurea (SU) at baseline. This was a pooled analysis of patient-level data from prospective, randomized, controlled 24-week trials. Patients received algorithm 1 (1 IU increase once daily, if fasting plasma glucose [FPG] > target), algorithm 2 (2 IU increase every 3 days, if FPG > target), or algorithm 3 (treat-to-target, generally 2-8 IU increase weekly based on 2-day mean FPG levels). Glycemic control, insulin dose, and hypoglycemic events were compared between algorithms. Overall, 1380 patients were included. In patients receiving metformin and SU at baseline, there were no significant differences in glycemic control between algorithms. Weight-adjusted dose was higher for algorithm 2 vs algorithms 1 and 3 (P = 0.0037 and P < 0.0001, respectively), though results were not significantly different when adjusted for reductions in HbA1c (0.36 IU/kg, 0.43 IU/kg, and 0.31 IU/kg for algorithms 1, 2, and 3, respectively). Yearly hypoglycemic event rates (confirmed blood glucose <56 mg/dL) were higher for algorithm 3 than algorithms 1 (P = 0.0003) and 2 (P < 0.0001). Three algorithms for initiation and titration of insulin glargine in patients with T2DM resulted in similar levels of glycemic control, with lower rates of hypoglycemia for patients treated using simpler algorithms 1 and 2. © 2013 Ruijin Hospital, Shanghai Jiaotong University School of Medicine and Wiley Publishing Asia Pty Ltd.
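
    The three titration rules, as summarized above, can be sketched as a single dispatch function. The 100 mg/dL target and the dose-increment mapping for algorithm 3 are illustrative assumptions, not the trials' protocols.

        def titrate_glargine(dose, fpg_readings, day, algorithm, target=100):
            """Return the next daily glargine dose (IU); FPG in mg/dL.

            Algorithm 1: +1 IU once daily if FPG > target.
            Algorithm 2: +2 IU every 3 days if FPG > target.
            Algorithm 3: treat-to-target, +2 to +8 IU weekly from 2-day mean FPG.
            """
            if algorithm == 1 and fpg_readings[-1] > target:
                return dose + 1
            if algorithm == 2 and day % 3 == 0 and fpg_readings[-1] > target:
                return dose + 2
            if algorithm == 3 and day % 7 == 0:
                mean_fpg = sum(fpg_readings[-2:]) / 2
                if mean_fpg > target:
                    # Larger excursions above target get larger increments.
                    return dose + min(8, max(2, (mean_fpg - target) // 20 * 2))
            return dose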

  1. Spread Spectrum Signal Characteristic Estimation Using Exponential Averaging and an Ad-Hoc Chip Rate Estimator

    DTIC Science & Technology

    2007-03-01

    The fast Fourier transform (FFT) accumulation method and the strip spectral correlation algorithm subdivide the support region in the bi-frequency plane: the FFT accumulation method divides it into diamond shapes, while the strip spectral correlation algorithm subdivides the region into strips, each covering a number of the FFT accumulation...

  2. Iterative Neighbour-Information Gathering for Ranking Nodes in Complex Networks

    NASA Astrophysics Data System (ADS)

    Xu, Shuang; Wang, Pei; Lü, Jinhu

    2017-01-01

    Designing node influence ranking algorithms can provide insights into network dynamics, functions and structures. Increasing evidence reveals that a node's spreading ability largely depends on its neighbours. We introduce an iterative neighbour-information gathering (Ing) process with three parameters: a transformation matrix, a priori information and an iteration time. The Ing process iteratively combines priori information from neighbours via the transformation matrix, and iteratively assigns an Ing score to each node to evaluate its influence. The algorithm is applicable to any type of network, and includes some traditional centralities as special cases, such as degree, semi-local, and LeaderRank. The Ing process converges in strongly connected networks, with speed depending on the two largest eigenvalues of the transformation matrix. Interestingly, eigenvector centrality corresponds to a limit case of the algorithm. By comparing with eight renowned centralities, simulations of the susceptible-infected-removed (SIR) model on real-world networks reveal that the Ing can offer more exact rankings, even without a priori information. We also observe that an optimal iteration time always exists that best characterizes node influence. The proposed algorithms bridge the gaps among some existing measures, and may have potential applications in infectious disease control and the design of optimal information spreading strategies.
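
    A minimal sketch of the iterative gathering idea, assuming the transformation matrix is the adjacency matrix and the prior is a per-node score vector; the actual Ing process parameterizes all three quantities.

        import numpy as np

        def ing_scores(A, prior, n_iter):
            """Iteratively combine neighbours' information through A."""
            s = np.asarray(prior, dtype=float)
            for _ in range(n_iter):
                s = A @ s                  # gather information from neighbours
                s /= np.linalg.norm(s)     # normalize to keep scores bounded
            return s

        # As n_iter grows, s approaches the leading eigenvector of A, which is
        # consistent with eigenvector centrality being a limit case.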

  3. A social activity and physical contact-based routing algorithm in mobile opportunistic networks for emergency response to sudden disasters

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoming; Lin, Yaguang; Zhang, Shanshan; Cai, Zhipeng

    2017-05-01

    Sudden disasters such as earthquakes, floods and hurricanes necessitate the employment of communication networks to carry out emergency response activities. Routing has a significant impact on the functionality, performance and flexibility of communication networks. In this article, the routing problem is studied considering the delivery ratio of messages, the overhead ratio of messages and the average delay of messages in mobile opportunistic networks (MONs) for enterprise-level emergency response communications in sudden disaster scenarios. Unlike traditional routing methods for MONs, this article presents a new two-stage spreading and forwarding dynamic routing algorithm based on the proposed social activity degree and physical contact factor of mobile users. A new modelling method for describing the dynamically evolving topology of a MON is first proposed. Then a multi-copy spreading strategy based on the social activity degree of nodes and a single-copy forwarding strategy based on the physical contact factor between nodes are designed. Compared with the most relevant routing algorithms, such as Epidemic, Prophet, Labelled-sim, Dlife-comm and Distribute-sim, the proposed routing algorithm significantly increases the delivery ratio of messages and decreases the overhead ratio and average delay of messages.

  4. Decorporation Approach after Rat Lung Contamination with Plutonium: Evaluation of the Key Parameters Influencing the Efficacy of a Protracted Chelation Treatment.

    PubMed

    Grémy, Olivier; Coudert, Sylvie; Renault, Daniel; Miccoli, Laurent

    2017-11-01

    While the efficacy of a protracted zinc (Zn)- or calcium (Ca)-diethylenetriaminepentaacetic acid (DTPA) treatment in reducing transuranic body burden has already been demonstrated, questions about therapeutic variables remain. In response to this, we designed animal experiments primarily to assess both the effect of fractionation of a given dose and the effect of the frequency of dose fraction, with the same total dose. In our study, rats were contaminated intravenously with plutonium (Pu) then treated several days later with Ca-DTPA given at once or in various split-dose regimens cumulating to the same total dose and spread over several days. Similar efficacies were induced by the injection of the total dose or by splitting the dose in several smaller doses, independent of the number of doses and the dose level per injection. In a second study, rats were pulmonary contaminated, and three weeks later they received a Ca-DTPA dose 11-fold higher than the maximal daily recommended dose, administered either as a single bolus or as numerous multiple injections cumulating to the same dose, based on different injection frequency schedules. Independent of frequency schedule, the various split-dose regimens spread over weeks/months were as efficient as single delivery of the total dose in mobilizing lung plutonium, and had a therapeutic advantage for removal of retained hepatic and bone plutonium burdens. We concluded that cumulative dose level was a therapeutic variable of greater importance than the distribution of split doses for the success of a repeated treatment regimen on retained tissue plutonium. In addition, pulmonary administration of clodronate, which aims at killing alveolar macrophages and subsequently releasing their plutonium content, and which is associated with a continuous Ca-DTPA infusion regimen, suggested that the efficacy of injected Ca-DTPA in decorporating lung deposit is limited, due to its restricted penetration into alveolar macrophages and not because plutonium, as a physicochemical form, is unavailable for chelation.

  5. Ridge filter design and optimization for the broad-beam three-dimensional irradiation system for heavy-ion radiotherapy.

    PubMed

    Schaffner, B; Kanai, T; Futami, Y; Shimbo, M; Urakabe, E

    2000-04-01

    The broad-beam three-dimensional irradiation system under development at the National Institute of Radiological Sciences (NIRS) requires a small ridge filter to spread the initially monoenergetic heavy-ion beam into a small spread-out Bragg peak (SOBP). A large SOBP covering the target volume is then achieved by a superposition of differently weighted and displaced small SOBPs. Two approaches were studied for the definition of a suitable ridge filter, and experimental verifications were performed. Both approaches show good agreement between the calculated and measured dose and lead to good homogeneity of the biological dose in the target. However, the ridge filter design that produces a Gaussian-shaped spectrum of the particle ranges was found to be more robust to small errors and uncertainties in beam application. Furthermore, an optimization procedure for two fields was applied to compensate for the missing dose from the fragmentation tail for the case of a simple-geometry target. The optimized biological dose distributions show that very good homogeneity is achievable in the target.
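
    The superposition step translates directly into code: depth-shifted copies of the small SOBP are combined with optimized weights. A minimal numpy sketch, with `small_sobp` an assumed callable depth-dose model rather than NIRS's measured data.

        import numpy as np

        def superpose_sobp(depths, small_sobp, shifts, weights):
            """Build a large spread-out Bragg peak from weighted,
            depth-shifted copies of a small SOBP."""
            total = np.zeros_like(depths, dtype=float)
            for shift, w in zip(shifts, weights):
                total += w * small_sobp(depths - shift)
            return total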

  6. Ultrasound assessment of cranial spread during caudal blockade in children: Effect of different volumes of local anesthetic

    PubMed Central

    Sinha, Chandni; Kumar, Amarjeet; Sharma, Shalini; Singh, Akhilesh Kumar; Majumdar, Somak; Kumar, Ajeet; Sahay, Nishant; Kumar, Bindey; Bhadani, UK

    2017-01-01

    Background: Ultrasound-guided caudal block injection is a simple, safe, and effective method of anesthesia/analgesia in pediatric patients. The volume of caudal drug required has always been a matter of debate. Materials and Methods: This prospective, randomized, double-blinded study aimed to measure the extent of the cranial spread of caudally administered levobupivacaine in Indian children by means of real-time ultrasonography. Ninety American Society of Anesthesiologists I/II children scheduled for urogenital surgeries were enrolled in this trial. Anesthesia and caudal analgesia were administered in a standardized manner. The patients received 0.5 ml/kg, 1 ml/kg, or 1.25 ml/kg of 0.125% levobupivacaine according to the group allocated. The cranial spread of the local anesthetic was noted using ultrasound. Results: There was no difference in the spread when related to age, sex, weight, or body mass index. A significant difference in ultrasound-assessed cranial spread of the local anesthetic was found between Group 1 (0.5 ml/kg) and both Group 2 (1 ml/kg) (P = 0.001) and Group 3 (1.25 ml/kg) (P < 0.001), but there was no significant difference between Group 2 and Group 3 (P = 0.451), revealing that the spinal level of spread differs only between 0.5 ml/kg and the larger volumes of local anesthetic. Conclusion: In conclusion, the ultrasound assessment of local anesthetic spread after a caudal block showed that cranial spread of the block is dependent on the volume injected into the caudal space. Since there was no difference between 1 ml/kg and 1.25 ml/kg, to achieve a dermatomal blockade up to the thoracic level, we might have to increase the dose beyond 1.25 ml/kg, keeping the toxic dose in mind. PMID:29033727

  7. Fluence map optimization (FMO) with dose-volume constraints in IMRT using the geometric distance sorting method.

    PubMed

    Lan, Yihua; Li, Cunhua; Ren, Haozheng; Zhang, Yong; Min, Zhifang

    2012-10-21

    A new heuristic algorithm based on the so-called geometric distance sorting technique is proposed for solving the fluence map optimization problem with dose-volume constraints, one of the most essential tasks in inverse planning for IMRT. The framework of the proposed method is an iterative process which begins with a simple linearly constrained quadratic optimization model that ignores the dose-volume constraints; dose constraints for the voxels violating the dose-volume constraints are then gradually added into the quadratic optimization model step by step until all the dose-volume constraints are satisfied. In each iteration step, an interior point method is adopted to solve each new linearly constrained quadratic program. To choose proper candidate voxels for the current round of constraint adding, a so-called geometric distance, defined in the transformed standard quadratic form of the fluence map optimization model, is used to guide the selection of the voxels. The geometric distance sorting technique largely reduces the unexpected increase of the objective function value inevitably caused by constraint adding, and can be regarded as an upgrade of the traditional dose sorting technique. A geometric explanation of the proposed method is also given, and a proposition is proved to support the heuristic idea. In addition, a smart constraint adding/deleting strategy is designed to ensure stable iteration convergence. The new algorithm is tested on four cases (head-and-neck, prostate, lung, and oropharyngeal) and compared with the algorithm based on the traditional dose sorting technique. Experimental results show that the proposed method is more suitable for guiding the selection of new constraints than the traditional dose sorting method, especially for cases whose target regions have non-convex shapes, and is, to some extent, a more efficient technique for choosing constraints. By integrating a smart constraint adding/deleting scheme within the iteration framework, the new technique builds up an improved algorithm for solving the fluence map optimization problem with dose-volume constraints.

  8. [Clinical applications of dosing algorithm in the prediction of warfarin maintenance dose].

    PubMed

    Huang, Sheng-wen; Xiang, Dao-kang; An, Bang-quan; Li, Gui-fang; Huang, Ling; Wu, Hai-li

    2011-12-27

    To evaluate the feasibility of clinical application of a genetics-based dosing algorithm for the prediction of warfarin maintenance dose in a Chinese population. Clinical data were collected and blood samples harvested from a total of 126 patients undergoing heart valve replacement. The genotypes of VKORC1 and CYP2C9 were determined by melting curve analysis after PCR. Patients were divided randomly into study and control groups. In the study group, the first three doses of warfarin were prescribed according to the predicted warfarin maintenance dose, while warfarin was initiated at 2.5 mg/d in the control group. Warfarin doses were adjusted according to the measured international normalized ratio (INR) values, and all subjects were followed for 50 days after the initiation of warfarin therapy. At the end of the 50-day follow-up period, the proportions of patients on a stable dose were 82.4% (42/51) and 62.5% (30/48) in the study and control groups, respectively. The mean durations of reaching a stable dose of warfarin were (27.5 ± 1.8) and (34.7 ± 1.8) days, and the median durations were (24.0 ± 1.7) and (33.0 ± 4.5) days, in the study and control groups, respectively. A significant difference existed in the duration of reaching a stable dose between the two groups (P = 0.012). Compared with the control group, the hazard ratio (HR) for the duration of reaching a stable dose was 1.786 in the study group (95%CI 1.088 - 2.875, P = 0.026). The predicted dosing algorithm incorporating genetic and non-genetic factors may shorten the duration of achieving a stable dose of warfarin, and the present study validates the feasibility of its clinical application.
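
    Dosing algorithms of this kind are typically regressions over clinical and genetic covariates. The sketch below is purely illustrative; the coefficients are placeholders, not the study's fitted values.

        def predict_warfarin_dose(age, weight_kg, vkorc1_variants, cyp2c9_variants):
            """Toy regression-style maintenance-dose predictor (mg/day)."""
            dose = 4.0                        # baseline daily dose, assumed
            dose -= 0.01 * max(age - 40, 0)   # older patients need less
            dose += 0.01 * (weight_kg - 60)   # heavier patients need more
            dose -= 0.8 * vkorc1_variants     # per VKORC1 variant allele
            dose -= 0.5 * cyp2c9_variants     # per CYP2C9 *2/*3 allele
            return max(dose, 0.5)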

  9. The influence of different signal-to-background ratios on spatial resolution and F18-FDG-PET quantification using point spread function and time-of-flight reconstruction.

    PubMed

    Rogasch, Julian Mm; Hofheinz, Frank; Lougovski, Alexandr; Furth, Christian; Ruf, Juri; Großer, Oliver S; Mohnike, Konrad; Hass, Peter; Walke, Mathias; Amthauer, Holger; Steffen, Ingo G

    2014-12-01

    F18-fluorodeoxyglucose positron-emission tomography (FDG-PET) reconstruction algorithms can have substantial influence on quantitative image data used, e.g., for therapy planning or monitoring in oncology. We analyzed radial activity concentration profiles of differently reconstructed FDG-PET images to determine the influence of varying signal-to-background ratios (SBRs) on the respective spatial resolution, activity concentration distribution, and quantification (standardized uptake value [SUV], metabolic tumor volume [MTV]). Measurements were performed on a Siemens Biograph mCT 64 using a cylindrical phantom containing four spheres (diameter, 30 to 70 mm) filled with F18-FDG applying three SBRs (SBR1, 16:1; SBR2, 6:1; SBR3, 2:1). Images were reconstructed employing six algorithms (filtered backprojection [FBP], FBP + time-of-flight analysis [FBP + TOF], 3D-ordered subset expectation maximization [3D-OSEM], 3D-OSEM + TOF, point spread function [PSF], PSF + TOF). Spatial resolution was determined by fitting the convolution of the object geometry with a Gaussian point spread function to radial activity concentration profiles. MTV delineation was performed using fixed thresholds and semiautomatic background-adapted thresholding (ROVER, ABX, Radeberg, Germany). The pairwise Wilcoxon test revealed significantly higher spatial resolutions for PSF + TOF (up to 4.0 mm) compared to PSF, FBP, FBP + TOF, 3D-OSEM, and 3D-OSEM + TOF at all SBRs (each P < 0.05), with the highest differences for SBR1 decreasing to the lowest for SBR3. Edge elevations in radial activity profiles (Gibbs artifacts) were highest for PSF and PSF + TOF, declining with decreasing SBR (PSF + TOF, largest sphere; SBR1, 6.3%; SBR3, 2.7%). These artifacts induce substantial SUVmax overestimation compared to the reference SUV for PSF algorithms at SBR1 and SBR2, leading to substantial MTV underestimation in threshold-based segmentation. In contrast, both PSF algorithms provided the lowest deviation of SUVmean from the reference SUV at SBR1 and SBR2. At high contrast, the PSF algorithms provided the highest spatial resolution and lowest SUVmean deviation from the reference SUV. In contrast, both algorithms showed the highest deviations in SUVmax and threshold-based MTV definition. At low contrast, all investigated reconstruction algorithms performed approximately equally. The use of PSF algorithms for quantitative PET data, e.g., for target volume definition or in serial PET studies, should be undertaken with caution, especially when comparing SUVs of lesions with high and low contrast.

  10. Local ROI Reconstruction via Generalized FBP and BPF Algorithms along More Flexible Curves.

    PubMed

    Yu, Hengyong; Ye, Yangbo; Zhao, Shiying; Wang, Ge

    2006-01-01

    We study the local region-of-interest (ROI) reconstruction problem, also referred to as the local CT problem. Our scheme includes two steps: (a) the local truncated normal-dose projections are extended to a global dataset by combining a few global low-dose projections; (b) the ROI is reconstructed by either the generalized filtered backprojection (FBP) or backprojection-filtration (BPF) algorithm. The simulation results show that both the FBP and BPF algorithms can produce satisfactory results, with image quality in the ROI comparable to that of the corresponding global CT reconstruction.

  11. Fully Convolutional Architecture for Low-Dose CT Image Noise Reduction

    NASA Astrophysics Data System (ADS)

    Badretale, S.; Shaker, F.; Babyn, P.; Alirezaie, J.

    2017-10-01

    One of the critical topics in medical low-dose Computed Tomography (CT) imaging is how best to maintain image quality. As the quality of images decreases with lowering of the X-ray radiation dose, improving image quality is extremely important and challenging. We have proposed a novel approach to denoise low-dose CT images. Our algorithm directly learns an end-to-end mapping from low-dose CT images to denoised normal-dose CT images. Our method is based on a deep convolutional neural network with rectified linear units. By learning various low-level to high-level features from a low-dose image, the proposed algorithm is capable of creating a high-quality denoised image. We demonstrate the superiority of our technique by comparing the results with two other state-of-the-art methods in terms of peak signal-to-noise ratio, root mean square error, and a structural similarity index.
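
    A minimal PyTorch sketch of a fully convolutional denoiser of the kind described; depth, width, and kernel sizes are assumptions, since the abstract does not specify the architecture.

        import torch
        import torch.nn as nn

        # Stacked conv + ReLU layers mapping a low-dose slice to a denoised one.
        denoiser = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, kernel_size=3, padding=1),
        )

        # Training would minimize, e.g., the MSE between the network output for
        # a low-dose slice and the corresponding normal-dose slice.
        loss_fn = nn.MSELoss()
        low_dose = torch.randn(8, 1, 64, 64)      # stand-in batch
        normal_dose = torch.randn(8, 1, 64, 64)   # stand-in targets
        loss = loss_fn(denoiser(low_dose), normal_dose)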

  12. Warfarin Pharmacogenetics

    PubMed Central

    Johnson, Julie A.; Cavallari, Larisa H.

    2014-01-01

    The cytochrome P450 (CYP) 2C9 and vitamin K epoxide reductase complex 1 (VKORC1) genotypes have been strongly and consistently associated with warfarin dose requirements, and dosing algorithms incorporating genetic and clinical information have been shown to be predictive of stable warfarin dose. However, clinical trials evaluating genotype-guided warfarin dosing produced mixed results, calling into question the utility of this approach. Recent trials used surrogate markers as endpoints rather than clinical endpoints, further complicating translation of the data to clinical practice. The present data do not support genetic testing to guide warfarin dosing, but in the setting where genotype data are available, use of such data in those of European ancestry is reasonable. Outcomes data are expected from an on-going trial, observational studies continue, and more work is needed to define dosing algorithms that incorporate appropriate variants in minority populations; all these will further shape guidelines and recommendations on the clinical utility of genotype-guided warfarin dosing. PMID:25282448

  13. Small field depth dose profile of 6 MV photon beam in a simple air-water heterogeneity combination: A comparison between anisotropic analytical algorithm dose estimation with thermoluminescent dosimeter dose measurement.

    PubMed

    Mandal, Abhijit; Ram, Chhape; Mourya, Ankur; Singh, Navin

    2017-01-01

    To establish trends of the estimation error of dose calculation by the anisotropic analytical algorithm (AAA) with respect to dose measured by thermoluminescent dosimeters (TLDs) in an air-water heterogeneity for small field size photon beams. TLDs were irradiated along the central axis of the photon beam in four different solid water phantom geometries using three small field size single beams. The depth dose profiles were estimated using the AAA calculation model for each field size, and the estimated and measured depth dose profiles were compared. The overestimation (OE) within the air cavity was dependent on field size (f) and distance (x) from the solid water-air interface and was formulated as OE = -(0.63f + 9.40)x² + (-2.73f + 58.11)x + (0.06f² - 1.42f + 15.67). Beyond the cavity, at the point adjacent to the interface and at points distal from it, the OE depends on field size, with OE = 0.42f² - 8.17f + 71.63 and OE = 0.84f² - 1.56f + 17.57, respectively. The trend of the estimation error of the AAA dose calculation algorithm with respect to measured values has thus been formulated throughout the radiation path length along the central axis of a 6 MV photon beam in an air-water heterogeneity combination for small field size photon beams generated by a 6 MV linear accelerator.
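
    The fitted in-cavity relation transcribes directly into code (field size f and distance x in the units used by the source):

        def oe_in_cavity(f, x):
            """Overestimation (%) of AAA inside the air cavity, from the
            fitted relation quoted in the abstract."""
            return (-(0.63 * f + 9.40) * x**2
                    + (-2.73 * f + 58.11) * x
                    + (0.06 * f**2 - 1.42 * f + 15.67))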

  14. Deformable structure registration of bladder through surface mapping.

    PubMed

    Xiong, Li; Viswanathan, Akila; Stewart, Alexandra J; Haker, Steven; Tempany, Clare M; Chin, Lee M; Cormack, Robert A

    2006-06-01

    Cumulative dose distributions in fractionated radiation therapy depict the dose to normal tissues and therefore may permit an estimation of the risk of normal tissue complications. However, calculation of these distributions is highly challenging because of interfractional changes in the geometry of patient anatomy. This work presents an algorithm for deformable structure registration of the bladder and the verification of the accuracy of the algorithm using phantom and patient data. In this algorithm, the registration process involves conformal mapping of genus-zero surfaces using finite element analysis, guided by three control landmarks. The registration produces a correspondence between fractions of the triangular meshes used to describe the bladder surface. For validation of the algorithm, two types of balloons were inflated gradually to three times their original size, and several computerized tomography (CT) scans were taken during the process. The registration algorithm yielded a local accuracy of 4 mm along the balloon surface. The algorithm was then applied to CT data of patients receiving fractionated high-dose-rate brachytherapy to the vaginal cuff, with the vaginal cylinder in situ. The patients' bladder filling status was intentionally different for each fraction. The three required control landmark points were identified for the bladder based on anatomy. Out of an Institutional Review Board (IRB) approved study of 20 patients, 3 had radiographically identifiable points near the bladder surface that were used for verification of the accuracy of the registration. The verification point as seen in each fraction was compared with its predicted location based on affine as well as deformable registration. Despite the variation in bladder shape and volume, the deformable registration was accurate to 5 mm, consistently outperforming the affine registration. We conclude that the structure registration algorithm presented works with reasonable accuracy and provides a means of calculating cumulative dose distributions.

  16. SU-F-T-673: Effects of Cardiac Induced Brain Pulsations On Proton Minibeams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eagle, J; Marsh, S; Lee, E

    Purpose: To quantify the dosimetric impact of internal motion within the brain on spatially modulated proton minibeam radiation therapy (pMRT) for small animal research. Methods: The peak-to-valley dose ratio (PVDR) is an essential dosimetric factor for pMRT. Motion of an animal brain caused by cardiac-induced pulsations (CIP) can impact dose deposition. For synchrotron-generated, high dose rate X-ray microbeams this effect is evaded by the quasi-instantaneous delivery. By comparison, pMRT potentially suffers increased spread due to lower dose rates; however, for a given dose rate it is less susceptible to beam spread than microbeams, because its spatial modulation is an order of magnitude larger. Monte Carlo simulations in TOPAS were used to model the beam spread for a 50.5MeV pMRT beam. Motion effects were simulated for a 50mm thick brass collimator with 0.3mm slit width and 1.0mm center-to-center spacing in a water phantom. The maximum motion in a rat brain due to CIP has been reported to be 0.06mm. Motion was simulated with peak amplitudes in the range 0–0.2mm. Results: The impact of 0.06mm peak motion was minimal, reducing the PVDR by about 1% at a depth of 10mm. For 0.2mm peak motion the PVDR was reduced by 16% at a depth of 10mm. Conclusion: For the pMRT beam, the magnitude of cardiac-induced brain motion has minimal impact on the PVDR for the investigated collimator geometry. For narrower beams the effect is likely to be larger. This indicates that delivery of pMRT to small animal brains should not be affected considerably by beamlines with linac-compatible dose rates.
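
    The PVDR itself is straightforward to estimate from a lateral dose profile. The sketch below uses local maxima and minima, one of several possible peak/valley definitions; the profile array is an assumption.

        import numpy as np

        def pvdr(lateral_profile):
            """Peak-to-valley dose ratio: mean local maximum over mean
            local minimum of a lateral dose profile."""
            p = np.asarray(lateral_profile, dtype=float)
            interior = p[1:-1]
            peaks = interior[(interior > p[:-2]) & (interior > p[2:])]
            valleys = interior[(interior < p[:-2]) & (interior < p[2:])]
            return peaks.mean() / valleys.mean()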

  17. Partitioning of the degradation space for OCR training

    NASA Astrophysics Data System (ADS)

    Barney Smith, Elisa H.; Andersen, Tim

    2006-01-01

    Generally speaking, optical character recognition algorithms tend to perform better when presented with homogeneous data. This paper studies a method designed to increase the homogeneity of training data, based on an understanding of the types of degradations that occur during the printing and scanning process, and how these degradations affect the homogeneity of the data. While it has been shown that dividing the degradation space by edge spread improves recognition accuracy over dividing the degradation space by threshold or point spread function width alone, the challenge is in deciding how many partitions to make and at what values of edge spread the divisions should be made. Clustering of different types of character features, fonts, sizes, resolutions and noise levels shows that edge spread is indeed a strong indicator of the homogeneity of character data clusters.

  18. SU-F-J-148: A Collapsed Cone Algorithm Can Be Used for Quality Assurance for Monaco Treatment Plans for the MR-Linac

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hackett, S; Asselen, B van; Wolthaus, J

    2016-06-15

    Purpose: Treatment plans for the MR-linac, calculated in Monaco v5.19, include direct simulation of the effects of the 1.5T B₀ field. We tested the feasibility of using a collapsed-cone (CC) algorithm in Oncentra, which does not account for effects of the B₀ field, as a fast online, independent 3D check of dose calculations. Methods: Treatment plans for six patients were generated in Monaco with a 6 MV FFF beam and the B₀ field. All plans were recalculated with a CC model of the same beam. Plans for the same patients were also generated in Monaco without the B₀ field. The mean dose (Dmean) and doses to 10% (D10%) and 90% (D90%) of the volume were determined, as percentages of the prescribed dose, for target volumes and OARs in each calculated dose distribution. Student's t-tests between paired parameters from Monaco plans and corresponding CC calculations were performed. Results: Figure 1 shows an example of the difference between dose distributions calculated in Monaco, with the B₀ field, and with the CC algorithm. Figure 2 shows distributions of (absolute) differences between parameters for Monaco plans, with the B₀ field, and CC calculations. The Dmean and D90% values for the CTVs and PTVs were significantly different, but differences in dose distributions arose predominantly at the edges of the target volumes. Inclusion of the B₀ field had little effect on the agreement of the Dmean values, as illustrated by Figure 3, nor on the agreement of the D10% and D90% values. Conclusion: Dose distributions recalculated with a CC algorithm show good agreement with those calculated with Monaco, for plans both with and without the B₀ field, indicating that the CC algorithm could be used to check online treatment planning for the MR-linac. Agreement for a wider range of treatment sites, and the feasibility of using the γ-test as a simple pass/fail criterion, will be investigated.

  19. Algorithm of pulmonary emphysema extraction using low dose thoracic 3D CT images

    NASA Astrophysics Data System (ADS)

    Saita, S.; Kubo, M.; Kawata, Y.; Niki, N.; Nakano, Y.; Omatsu, H.; Tominaga, K.; Eguchi, K.; Moriyama, N.

    2006-03-01

    Recently, due to aging populations and smoking, the number of emphysema patients is increasing. Alveoli destroyed by emphysema cannot be restored, so early detection of emphysema is desired. We describe a quantitative algorithm for extracting emphysematous lesions and quantitatively evaluating their distribution patterns using low dose thoracic 3-D CT images. The algorithm identifies lung anatomies and extracts low attenuation areas (LAA) as emphysematous lesion candidates. By applying the algorithm to 100 thoracic 3-D CT images and to follow-up 3-D CT images, we demonstrate its potential effectiveness in assisting radiologists and physicians to quantitatively evaluate the distribution of emphysematous lesions and their evolution over time.
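
    LAA extraction is commonly a Hounsfield-unit thresholding step inside the segmented lung. The -950 HU cutoff below is a conventional emphysema threshold, assumed here because the abstract does not state the paper's value.

        import numpy as np

        def laa_percentage(ct_hu, lung_mask, threshold=-950):
            """Percentage of segmented-lung voxels below the LAA threshold."""
            lung = ct_hu[lung_mask]
            return 100.0 * (lung < threshold).mean()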

  20. Hop limited epidemic-like information spreading in mobile social networks with selfish nodes

    NASA Astrophysics Data System (ADS)

    Wu, Yahui; Deng, Su; Huang, Hongbin

    2013-07-01

    Similar to epidemics, information can be transmitted directly among users in mobile social networks. Different from epidemics, we can control the spreading process by adjusting the corresponding parameters (e.g., hop count) directly. This paper proposes a theoretical model to evaluate the performance of an epidemic-like spreading algorithm, in which the maximal hop count of the information is limited. In addition, our model can be used to evaluate the impact of users’ selfish behavior. Simulations show the accuracy of our theoretical model. Numerical results show that the information hop count can have an important impact. In addition, the impact of selfish behavior is related to the information hop count.
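
    A toy simulation of hop-limited epidemic-like spreading with selfish nodes, matching the two knobs the model studies (hop count and selfishness); all parameter values and the adjacency representation are illustrative.

        import random

        def spread(adjacency, source, max_hops, p_selfish=0.2, seed=0):
            """Relay a message until its hop counter reaches max_hops;
            selfish nodes refuse to relay. Returns the informed set."""
            rng = random.Random(seed)
            informed = {source}
            frontier = [(source, 0)]
            while frontier:
                node, hops = frontier.pop()
                if hops >= max_hops or rng.random() < p_selfish:
                    continue                    # hop limit reached or selfish
                for neighbour in adjacency[node]:
                    if neighbour not in informed:
                        informed.add(neighbour)
                        frontier.append((neighbour, hops + 1))
            return informed

        # adjacency = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
        # reached = spread(adjacency, source=0, max_hops=2)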

  1. DEVELOPMENT OF USER-FRIENDLY SIMULATION SYSTEM OF EARTHQUAKE INDUCED URBAN SPREADING FIRE

    NASA Astrophysics Data System (ADS)

    Tsujihara, Osamu; Gawa, Hidemi; Hayashi, Hirofumi

    Simulation of earthquake-induced urban spreading fire requires producing an analytical model of the target area, as well as analyzing the fire spread and presenting the results. In order to promote the use of such simulations, it is important that the simulation system be user-friendly and that the analysis results be demonstrated with a realistic presentation. In this study, a simulation system is developed based on a Petri-net algorithm, in which the target area can be modeled through easy operation and the analytical results are presented by realistic 3-D animation.

  2. Phase-Space Transport of Stochastic Chaos in Population Dynamics of Virus Spread

    NASA Astrophysics Data System (ADS)

    Billings, Lora; Bollt, Erik M.; Schwartz, Ira B.

    2002-06-01

    A general way to classify stochastic chaos is presented and applied to population dynamics models. A stochastic dynamical theory is used to develop an algorithmic tool to measure the transport across basin boundaries and predict the most probable regions of transport created by noise. The results of this tool are illustrated on a model of virus spread in a large population, where transport regions reveal how noise completes the necessary manifold intersections for the creation of emerging stochastic chaos.

  3. SU-F-T-273: Using a Diode Array to Explore the Weakness of TPS Dose Calculation Algorithm for VMAT and Sliding Window Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, J; Lu, B; Yan, G

    Purpose: To identify the weaknesses of the dose calculation algorithm in a treatment planning system for volumetric modulated arc therapy (VMAT) and sliding window (SW) techniques using a two-dimensional diode array. Methods: VMAT quality assurance (QA) was implemented with a diode array using multiple partial arcs divided from a VMAT plan; each partial arc had the same segments and the original monitor units. Arc angles were less than ±30°. Multiple arcs were delivered through consecutive, repetitive gantry rotations clockwise and counterclockwise. A source-to-axis distance setup with effective depths of 10 and 20 cm was used for the diode array. To isolate dose errors caused in delivery of the VMAT fields, numerous fields having the same segments as the VMAT field were irradiated using the static and step-and-shoot delivery techniques. The dose distributions of the SW technique were evaluated by creating split fields having fine moving steps of the multi-leaf collimator leaves. Doses calculated using the adaptive convolution algorithm were compared with measurements using distance-to-agreement and dose-difference criteria of 3 mm and 3%. Results: While beam delivery through the static and step-and-shoot techniques showed a passing rate of 97 ± 2%, partial-arc delivery of the VMAT fields brought the passing rate down to 85%. However, when leaf motion was restricted to less than 4.6 mm/°, the passing rate improved to 95 ± 2%. Similar passing rates were obtained for both the 10 and 20 cm effective-depth setups. The doses calculated for the SW technique showed a dose difference of over 7% at the final arrival point of the moving leaves. Conclusion: Error components in dynamic delivery of modulated beams were distinguished using the suggested QA method. This partial-arc method can be used for routine VMAT QA. An improved SW calculation algorithm is required to provide accurate dose estimates.
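
    The 3%/3 mm comparison above is a gamma-index analysis. A brute-force 2D sketch of a global gamma pass rate, with hypothetical dose planes and grid spacing:

        import numpy as np

        def gamma_pass_rate(d_ref, d_eval, spacing_mm=1.0, dd=0.03, dta_mm=3.0):
            ny, nx = d_ref.shape
            yy, xx = np.mgrid[0:ny, 0:nx] * spacing_mm
            d_max = d_ref.max()          # global dose-difference normalization
            passed = 0
            for iy in range(ny):
                for ix in range(nx):
                    r2 = (yy - iy * spacing_mm) ** 2 + (xx - ix * spacing_mm) ** 2
                    delta2 = ((d_eval - d_ref[iy, ix]) / (dd * d_max)) ** 2
                    gamma2 = r2 / dta_mm ** 2 + delta2
                    passed += gamma2.min() <= 1.0   # gamma <= 1 means this point passes
            return passed / (nx * ny)

        ref = np.random.rand(40, 40) * 2.0                 # toy measured dose plane (Gy)
        ev = ref + np.random.normal(0, 0.02, ref.shape)    # toy calculated dose plane
        print(f"gamma pass rate: {100 * gamma_pass_rate(ref, ev):.1f}%")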

  4. SU-F-T-74: Experimental Validation of Monaco Electron Monte Carlo Dose Calculation for Small Fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Varadhan; Way, S; Arentsen, L

    2016-06-15

    Purpose: To verify experimentally the accuracy of the Monaco (Elekta) electron Monte Carlo (eMC) algorithm in calculating small-field-size depth doses, monitor units, and isodose distributions. Methods: Beam modeling of the eMC algorithm was performed for electron energies of 6, 9, 12, 15 and 18 MeV on an Elekta Infinity linac and all available applicator sizes (6, 10, 14, 20 and 25 cm cones). Electron cutouts of incrementally smaller field sizes (20, 40, 60 and 80% blocked from the open cone) were fabricated. Dose calculation was performed using a grid size smaller than one-tenth of the R{sub 80–20} electron distal falloff distance, and the number of particle histories was set at 500,000 per cm{sup 2}. Percent depth dose scans and beam profiles at dmax, d{sub 90} and d{sub 80} depths were measured for each cutout and energy with a Wellhoffer (IBA) Blue Phantom{sup 2} scanning system and compared against eMC-calculated doses. Results: The measured doses and output factors of incrementally reduced cutout sizes (down to 3 cm diameter) agreed with eMC-calculated doses within ±2.5%. The profile comparisons at dmax, d{sub 90} and d{sub 80} depths and the percent depth doses at reduced field sizes agreed within 2.5% or 2 mm. Conclusion: Our results indicate that the Monaco eMC algorithm can accurately predict depth doses, isodose distributions, and monitor units in a homogeneous water phantom for field sizes as small as 3.0 cm diameter for energies in the 6 to 18 MeV range at 100 cm SSD. Consequently, the old rule of thumb that approximates the limiting cutout size for an electron field by the lateral scatter equilibrium (E (MeV)/2.5 in centimeters of water) does not apply to the Monaco eMC algorithm.

  5. TH-AB-202-09: Direct-Aperture Optimization for Combined MV+kV Dose Planning in Fluoroscopic Real-Time Tumor-Tracking Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, X; Belcher, AH; Grelewicz, Z

    Purpose: Real-time kV fluoroscopic tumor tracking has the benefit of direct tumor position monitoring. However, there is clinical concern over the excess kV imaging dose to the patient when imaging in continuous fluoroscopic mode. This work addresses this issue by proposing a combined MV+kV direct-aperture optimization (DAO) approach that integrates the kV imaging beam into treatment planning such that the kV radiation is considered a contributor to the overall dose delivery. Methods: The combined MV+kV DAO approach includes three algorithms. First, a projected quasi-Newton algorithm (L-BFGS) is used to find the optimized MV+kV fluence for the best possible dose distribution. Then, Engel’s algorithm is applied to optimize the total number of monitor units and heuristically optimize the number of apertures. Finally, an aperture shape optimization (ASO) algorithm is applied to locally optimize the leaf positions of the MLC. Results: Compared with conventional DAO MV plans with continuous kV fluoroscopic tracking, the combined MV+kV DAO plan leads to a reduction in the total number of MV monitor units, because the kV dose is included as part of the PTV dose, and was also found to reduce the mean and maximum doses to the organs at risk (OARs). Compared with a conventional DAO MV plan without kV tracking, the OAR doses in the combined MV+kV DAO plan were only slightly higher. DVH curves show that the combined MV+kV DAO plan provided about the same PTV coverage as the conventional DAO plans without kV imaging. Conclusion: We report a combined MV+kV DAO approach that allows real-time kV-imaging tumor tracking with only a trivial increase in the OAR doses while providing the same PTV coverage. The approach is suitable for clinical implementation.

  6. Phase-unwrapping algorithm by a rounding-least-squares approach

    NASA Astrophysics Data System (ADS)

    Juarez-Salazar, Rigoberto; Robledo-Sanchez, Carlos; Guerrero-Sanchez, Fermin

    2014-02-01

    A simple and efficient phase-unwrapping algorithm based on a rounding procedure and a global least-squares minimization is proposed. Instead of processing the gradient of the wrapped phase, this algorithm operates on the gradient of the phase jumps using a robust, noniterative scheme. Thus, the residue-spreading and over-smoothing effects are reduced. The algorithm's performance is compared with four well-known phase-unwrapping methods: minimum cost network flow (MCNF), fast Fourier transform (FFT), quality-guided, and branch-cut. A computer simulation and experimental results show that the proposed algorithm reaches a higher accuracy than the MCNF method, with a low computing time similar to that of the FFT phase-unwrapping method. Moreover, since the proposed algorithm is simple, fast, and requires no user intervention, it could be used in metrological interferometric and fringe-projection automatic real-time applications.
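
    The rounding idea is easiest to see in one dimension: round each wrapped-gradient sample to the nearest multiple of 2π and subtract it. The sketch below is a simplified 1D analogue, not the authors' 2D least-squares implementation:

        import numpy as np

        def unwrap_1d(wrapped):
            d = np.diff(wrapped)
            # round each gradient sample to the nearest multiple of 2*pi and
            # subtract it, removing the wrap jump while keeping the true gradient
            jumps = 2 * np.pi * np.round(d / (2 * np.pi))
            return np.concatenate(([wrapped[0]], wrapped[0] + np.cumsum(d - jumps)))

        true_phase = np.linspace(0, 6 * np.pi, 200)
        wrapped = np.angle(np.exp(1j * true_phase))   # wrap into (-pi, pi]
        print(np.allclose(unwrap_1d(wrapped), true_phase))   # -> True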

  7. Adaptive optics image restoration algorithm based on wavefront reconstruction and adaptive total variation method

    NASA Astrophysics Data System (ADS)

    Li, Dongming; Zhang, Lijuan; Wang, Ting; Liu, Huan; Yang, Jinhua; Chen, Guifen

    2016-11-01

    To improve the quality of adaptive optics (AO) images, we study an AO image restoration algorithm based on wavefront reconstruction technology and an adaptive total variation (TV) method. First, wavefront reconstruction using Zernike polynomials provides an initial estimate of the point spread function (PSF). Then, we develop iterative solutions for AO image restoration, addressing the joint deconvolution problem. Image restoration experiments were performed to verify the restoration effect of the proposed algorithm. The experimental results show that, compared with the RL-IBD and Wiener-IBD algorithms, the GMG measures (for a real AO image) from our algorithm are increased by 36.92% and 27.44%, respectively, the computation time is decreased by 7.2% and 3.4%, respectively, and the estimation accuracy is significantly improved.

  8. A nonvoxel-based dose convolution/superposition algorithm optimized for scalable GPU architectures.

    PubMed

    Neylon, J; Sheng, K; Yu, V; Chen, Q; Low, D A; Kupelian, P; Santhanam, A

    2014-10-01

    Real-time adaptive planning and treatment has been infeasible due in part to its high computational complexity. There have been many recent efforts to utilize graphics processing units (GPUs) to accelerate the computational performance and dose accuracy in radiation therapy. Data structure and memory access patterns are the key GPU factors that determine the computational performance and accuracy. In this paper, the authors present a nonvoxel-based (NVB) approach to maximize computational and memory access efficiency and throughput on the GPU. The proposed algorithm employs a ray-tracing mechanism to restructure the 3D data sets computed from the CT anatomy into a nonvoxel-based framework. In a process that takes only a few milliseconds of computing time, the algorithm restructured the data sets by ray-tracing through precalculated CT volumes to realign the coordinate system along the convolution direction, as defined by zenithal and azimuthal angles. During the ray-tracing step, the data were resampled according to radial sampling and parallel ray-spacing parameters, making the algorithm independent of the original CT resolution. The nonvoxel-based algorithm presented in this paper also demonstrated a trade-off in computational performance and dose accuracy for different coordinate system configurations. In order to find the best balance between the computed speedup and the accuracy, the authors employed an exhaustive parameter search on all sampling parameters that defined the coordinate system configuration: zenithal, azimuthal, and radial sampling of the convolution algorithm, as well as the parallel ray spacing during ray tracing. The angular sampling parameters were varied between 4 and 48 discrete angles, while both radial sampling and parallel ray spacing were varied from 0.5 to 10 mm. The gamma distribution analysis method (γ) was used to compare the dose distributions using 2% and 2 mm dose difference and distance-to-agreement criteria, respectively. Accuracy was investigated using three distinct phantoms with varied geometries and heterogeneities and on a series of 14 segmented lung CT data sets. Performance gains were calculated using three 256 mm cube homogeneous water phantoms, with isotropic voxel dimensions of 1, 2, and 4 mm. The nonvoxel-based GPU algorithm was independent of the data size and provided significant computational gains over the CPU algorithm for large CT data sizes. The parameter search analysis also showed that the ray combination of 8 zenithal and 8 azimuthal angles along with 1 mm radial sampling and 2 mm parallel ray spacing maintained dose accuracy with greater than 99% of voxels passing the γ test. Combining the acceleration obtained from GPU parallelization with the sampling optimization, the authors achieved a total performance improvement factor of >175 000 when compared to their voxel-based ground truth CPU benchmark and a factor of 20 compared with a voxel-based GPU dose convolution method. The nonvoxel-based convolution method yielded substantial performance improvements over a generic GPU implementation, while maintaining accuracy as compared to a CPU-computed ground truth dose distribution. Such an algorithm can be a key contribution toward developing tools for adaptive radiation therapy systems.

  9. A nonvoxel-based dose convolution/superposition algorithm optimized for scalable GPU architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neylon, J., E-mail: jneylon@mednet.ucla.edu; Sheng, K.; Yu, V.

    Purpose: Real-time adaptive planning and treatment has been infeasible due in part to its high computational complexity. There have been many recent efforts to utilize graphics processing units (GPUs) to accelerate the computational performance and dose accuracy in radiation therapy. Data structure and memory access patterns are the key GPU factors that determine the computational performance and accuracy. In this paper, the authors present a nonvoxel-based (NVB) approach to maximize computational and memory access efficiency and throughput on the GPU. Methods: The proposed algorithm employs a ray-tracing mechanism to restructure the 3D data sets computed from the CT anatomy into a nonvoxel-based framework. In a process that takes only a few milliseconds of computing time, the algorithm restructured the data sets by ray-tracing through precalculated CT volumes to realign the coordinate system along the convolution direction, as defined by zenithal and azimuthal angles. During the ray-tracing step, the data were resampled according to radial sampling and parallel ray-spacing parameters, making the algorithm independent of the original CT resolution. The nonvoxel-based algorithm presented in this paper also demonstrated a trade-off in computational performance and dose accuracy for different coordinate system configurations. In order to find the best balance between the computed speedup and the accuracy, the authors employed an exhaustive parameter search on all sampling parameters that defined the coordinate system configuration: zenithal, azimuthal, and radial sampling of the convolution algorithm, as well as the parallel ray spacing during ray tracing. The angular sampling parameters were varied between 4 and 48 discrete angles, while both radial sampling and parallel ray spacing were varied from 0.5 to 10 mm. The gamma distribution analysis method (γ) was used to compare the dose distributions using 2% and 2 mm dose difference and distance-to-agreement criteria, respectively. Accuracy was investigated using three distinct phantoms with varied geometries and heterogeneities and on a series of 14 segmented lung CT data sets. Performance gains were calculated using three 256 mm cube homogeneous water phantoms, with isotropic voxel dimensions of 1, 2, and 4 mm. Results: The nonvoxel-based GPU algorithm was independent of the data size and provided significant computational gains over the CPU algorithm for large CT data sizes. The parameter search analysis also showed that the ray combination of 8 zenithal and 8 azimuthal angles along with 1 mm radial sampling and 2 mm parallel ray spacing maintained dose accuracy with greater than 99% of voxels passing the γ test. Combining the acceleration obtained from GPU parallelization with the sampling optimization, the authors achieved a total performance improvement factor of >175 000 when compared to their voxel-based ground truth CPU benchmark and a factor of 20 compared with a voxel-based GPU dose convolution method. Conclusions: The nonvoxel-based convolution method yielded substantial performance improvements over a generic GPU implementation, while maintaining accuracy as compared to a CPU-computed ground truth dose distribution. Such an algorithm can be a key contribution toward developing tools for adaptive radiation therapy systems.

  10. SU-F-T-148: Are the Approximations in Analytic Semi-Empirical Dose Calculation Algorithms for Intensity Modulated Proton Therapy for Complex Heterogeneities of Head and Neck Clinically Significant?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yepes, P; UT MD Anderson Cancer Center, Houston, TX; Titt, U

    2016-06-15

    Purpose: To evaluate the differences in dose distributions between the proton analytic semi-empirical dose calculation algorithm used in the clinic and Monte Carlo calculations for a sample of 50 head-and-neck (H&N) patients, and to estimate the potential clinical significance of the differences. Methods: A cohort of 50 H&N patients, treated at the University of Texas Cancer Center with Intensity Modulated Proton Therapy (IMPT), was selected for evaluation of the clinical significance of approximations in computed dose distributions. The H&N site was selected because of the highly inhomogeneous nature of the anatomy. The Fast Dose Calculator (FDC), a fast track-repeating accelerated Monte Carlo algorithm for proton therapy, was utilized to calculate the dose distributions delivered during treatment. Because of its short processing time, FDC allows for the processing of large cohorts of patients. FDC has been validated against GEANT4, a full Monte Carlo system, and against measurements in water and in inhomogeneous phantoms. A gamma-index analysis, DVHs, EUDs, and TCPs and NTCPs computed using published models were utilized to evaluate the differences between the treatment planning system (TPS) and FDC. Results: The Monte Carlo results systematically predict a lower dose delivered in the target. The observed differences can be as large as 8 Gy and should have a clinical impact. Gamma analysis also showed significant differences between the two approaches, especially for the target volumes. Conclusion: Monte Carlo calculation with fast algorithms is practical and should be considered for the clinic, at least as a treatment plan verification tool.

  11. Short range spread-spectrum radiolocation system and method

    DOEpatents

    Smith, Stephen F.

    2003-04-29

    A short range radiolocation system and associated methods that allow the location of an item, such as equipment, containers, pallets, vehicles, or personnel, within a defined area. A small, battery powered, self-contained tag is provided to an item to be located. The tag includes a spread-spectrum transmitter that transmits a spread-spectrum code and identification information. A plurality of receivers positioned about the area receive signals from a transmitting tag. The position of the tag, and hence the item, is located by triangulation. The system employs three different ranging techniques for providing coarse, intermediate, and fine spatial position resolution. Coarse positioning information is provided by use of direct-sequence code phase transmitted as a spread-spectrum signal. Intermediate positioning information is provided by the use of a difference signal transmitted with the direct-sequence spread-spectrum code. Fine positioning information is provided by use of carrier phase measurements. An algorithm is employed to combine the three data sets to provide accurate location measurements.
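
    The triangulation step can be illustrated by least-squares multilateration from measured ranges; the sketch below is a generic textbook formulation, not the patented coarse/intermediate/fine combination of ranging techniques:

        import numpy as np

        def locate(receivers, ranges):
            # Linearize |p - a_i|^2 = r_i^2 against the first receiver and
            # solve the resulting linear system in least squares.
            a0, r0 = receivers[0], ranges[0]
            A = 2 * (receivers[1:] - a0)
            b = (r0 ** 2 - ranges[1:] ** 2
                 + np.sum(receivers[1:] ** 2, axis=1) - np.sum(a0 ** 2))
            return np.linalg.lstsq(A, b, rcond=None)[0]

        rx = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
        tag = np.array([37.0, 62.0])
        meas = np.linalg.norm(rx - tag, axis=1)   # ideal, noise-free ranges
        print(locate(rx, meas))                   # -> approximately [37. 62.]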

  12. Optimization-based image reconstruction from sparse-view data in offset-detector CBCT

    NASA Astrophysics Data System (ADS)

    Bian, Junguo; Wang, Jiong; Han, Xiao; Sidky, Emil Y.; Shao, Lingxiong; Pan, Xiaochuan

    2013-01-01

    The field of view (FOV) of a cone-beam computed tomography (CBCT) unit in a single-photon emission computed tomography (SPECT)/CBCT system can be increased by offsetting the CBCT detector. Analytic-based algorithms have been developed for image reconstruction from data collected at a large number of densely sampled views in offset-detector CBCT. However, the radiation dose involved in a large number of projections can be of health concern to the imaged subject. CBCT imaging dose can be reduced by lowering the number of projections. As analytic-based algorithms are unlikely to reconstruct accurate images from sparse-view data, we investigate and characterize in this work optimization-based algorithms, including an adaptive steepest descent-weighted projection onto convex sets (ASD-WPOCS) algorithm, for image reconstruction from sparse-view data collected in offset-detector CBCT. Using simulated data and real data collected from a physical pelvis phantom and a patient, we verify and characterize properties of the algorithms under study. Results of our study suggest that optimization-based algorithms such as ASD-WPOCS may be developed to yield images of potential utility from a number of projections substantially smaller than those used currently in clinical SPECT/CBCT imaging, thus leading to a dose reduction in CBCT imaging.

  13. Objective performance assessment of five computed tomography iterative reconstruction algorithms.

    PubMed

    Omotayo, Azeez; Elbakri, Idris

    2016-11-22

    Iterative algorithms are gaining clinical acceptance in CT. We performed objective phantom-based image quality evaluation of five commercial iterative reconstruction algorithms available on four different multi-detector CT (MDCT) scanners at different dose levels, as well as the conventional filtered back-projection (FBP) reconstruction. Using the Catphan500 phantom, we evaluated image noise, contrast-to-noise ratio (CNR), modulation transfer function (MTF) and noise-power spectrum (NPS). The algorithms were evaluated over a CTDIvol range of 0.75-18.7 mGy on four major MDCT scanners: GE DiscoveryCT750HD (algorithms: ASIR™ and VEO™); Siemens Somatom Definition AS+ (algorithm: SAFIRE™); Toshiba Aquilion64 (algorithm: AIDR3D™); and Philips Ingenuity iCT256 (algorithm: iDose4™). Images were reconstructed using FBP and the respective iterative algorithms on the four scanners. Use of iterative algorithms decreased image noise and increased CNR, relative to FBP. In the dose range of 1.3-1.5 mGy, noise reduction using iterative algorithms was in the range of 11%-51% on the GE DiscoveryCT750HD, 10%-52% on the Siemens Somatom Definition AS+, 49%-62% on the Toshiba Aquilion64, and 13%-44% on the Philips Ingenuity iCT256. The corresponding CNR increase was in the range of 11%-105% on GE, 11%-106% on Siemens, 85%-145% on Toshiba and 13%-77% on Philips. Most algorithms did not affect the MTF, except for VEO™, which produced an increase in the limiting resolution of up to 30%. A shift in the peak of the NPS curve towards lower frequencies and a decrease in NPS amplitude were obtained with all iterative algorithms. VEO™ required long reconstruction times, while all other algorithms produced reconstructions in real time. Compared to FBP, iterative algorithms reduced image noise and increased CNR. The iterative algorithms available on different scanners achieved different levels of noise reduction and CNR increase, while spatial resolution improvements were obtained only with VEO™. This study is useful in that it provides performance assessment of the iterative algorithms available from several mainstream CT manufacturers.
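
    The CNR figures above come from ROI statistics on phantom images. A minimal sketch of one common CNR definition, with hypothetical ROI pixel values:

        import numpy as np

        def cnr(roi_object, roi_background):
            # one common definition: |mean difference| / background noise
            return abs(roi_object.mean() - roi_background.mean()) / roi_background.std()

        rng = np.random.default_rng(1)
        obj = rng.normal(120.0, 8.0, size=(20, 20))   # toy contrast-insert ROI (HU)
        bkg = rng.normal(100.0, 8.0, size=(20, 20))   # toy background ROI (HU)
        print(f"CNR = {cnr(obj, bkg):.2f}")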

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pokhrel, D; Badkul, R; Jiang, H

    Purpose: To compare dose distributions calculated using the iPlan XVMC algorithm and heterogeneity-corrected/uncorrected pencil beam (PB-hete/PB-homo) algorithms for SBRT treatments of lung tumors. Methods: Ten patients with centrally located solitary lung tumors were treated using MC-based SBRT to 60 Gy in 5 fractions for PTV V100% = 95%. The ITV was delineated on MIP images based on 4D-CT scans. PTVs (ITV + 5 mm margins) ranged from 10.1–106.5 cc (mean = 48.6 cc). MC-SBRT plans were generated with a combination of non-coplanar conformal arcs/beams using the iPlan XVMC algorithm (BrainLAB iPlan ver. 4.1.2) for a Novalis-TX consisting of HD-MLCs and a 6MV-SRS (1000 MU/min) mode, following RTOG 0813 dosimetric criteria. For comparison, the PB-hete/PB-homo algorithms were used to re-calculate the dose distributions using the same beam configurations, MLCs and monitor units. Plans were evaluated with isocenter/maximal/mean doses to the PTV. Normal lung doses were evaluated with V5/V10/V20 and mean lung dose (MLD), excluding the PTV. Other OAR doses, such as maximal spinal cord, 2cc-esophagus, maximal bronchial tree (BT) and maximal heart doses, were tabulated. Results: Maximal/mean/isocenter doses to the PTV calculated by PB-hete were uniformly larger than in the MC plans by factors of 1.09/1.13/1.07, on average, whereas they were consistently lower with PB-homo by factors of 0.9/0.84/0.9, respectively. The lung volumes covered by the 5 Gy/10 Gy/20 Gy isodose lines were comparable (on average within ±3%) when calculated by PB-hete compared to XVMC, but consistently lower with PB-homo by factors of 0.90/0.88/0.85, respectively. MLD was higher with PB-hete by 1.05, but lower with PB-homo by 0.9, on average, compared to XVMC. XVMC max-cord/max-BT/max-heart and 2cc-esophagus doses were comparable to PB-hete; however, PB-homo underestimates them by factors of 0.82/0.89/0.88/0.86, on average, respectively. Conclusion: PB-hete significantly overestimates dose to the PTV relative to XVMC, hence underdosing the target. MC is more complex and accurate in the presence of tissue heterogeneities. The magnitude of variation is largest for a 'small island tumor' surrounded by low-density lung tissue, because PB algorithms lack lateral electron scattering. Dose calculation with XVMC for lung SBRT is routinely performed in our clinic; its performance for head and neck/sinus cases will also be investigated.

  15. Dose Titration Algorithm Tuning (DTAT) should supersede ‘the’ Maximum Tolerated Dose (MTD) in oncology dose-finding trials

    PubMed Central

    Norris, David C.

    2017-01-01

    Background. Absent adaptive, individualized dose-finding in early-phase oncology trials, subsequent ‘confirmatory’ Phase III trials risk suboptimal dosing, with resulting loss of statistical power and reduced probability of technical success for the investigational therapy. While progress has been made toward explicitly adaptive dose-finding and quantitative modeling of dose-response relationships, most such work continues to be organized around a concept of ‘the’ maximum tolerated dose (MTD). The purpose of this paper is to demonstrate concretely how the aim of early-phase trials might be conceived, not as ‘dose-finding’, but as dose titration algorithm (DTA)-finding. Methods. A Phase I dosing study is simulated, for a notional cytotoxic chemotherapy drug, with neutropenia constituting the critical dose-limiting toxicity. The drug’s population pharmacokinetics and myelosuppression dynamics are simulated using published parameter estimates for docetaxel. The amenability of this model to linearization is explored empirically. The properties of a simple DTA targeting a neutrophil nadir of 500 cells/mm³ using a Newton-Raphson heuristic are explored through simulation in 25 simulated study subjects. Results. Individual-level myelosuppression dynamics in the simulation model approximately linearize under simple transformations of neutrophil concentration and drug dose. The simulated dose titration exhibits largely satisfactory convergence, with great variance in individualized optimal dosing. Some titration courses exhibit overshooting. Conclusions. The large inter-individual variability in simulated optimal dosing underscores the need to replace ‘the’ MTD with an individualized concept of MTDi. To illustrate this principle, the simplest possible DTA capable of realizing such a concept is demonstrated. Qualitative phenomena observed in this demonstration support discussion of the notion of tuning such algorithms. Although here illustrated specifically in relation to cytotoxic chemotherapy, the DTAT principle appears similarly applicable to Phase I studies of cancer immunotherapy and molecularly targeted agents. PMID:28663782
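
    The Newton-Raphson heuristic can be sketched on a toy patient model in which log nadir falls linearly with dose; the paper's pharmacokinetic/myelosuppression model is far richer, so everything below is illustrative only:

        import math

        def titrate(nadir_of_dose, target=500.0, dose=50.0, cycles=6, eps=1.0):
            for _ in range(cycles):
                y = math.log(nadir_of_dose(dose)) - math.log(target)
                # numerical derivative of log-nadir with respect to dose
                dy = (math.log(nadir_of_dose(dose + eps)) - math.log(nadir_of_dose(dose))) / eps
                dose -= y / dy          # Newton-Raphson update toward the nadir target
            return dose

        toy_model = lambda d: 4000.0 * math.exp(-0.02 * d)   # hypothetical patient response
        print(f"converged dose: {titrate(toy_model):.1f} mg")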

  16. A robust statistical estimation (RoSE) algorithm jointly recovers the 3D location and intensity of single molecules accurately and precisely

    NASA Astrophysics Data System (ADS)

    Mazidi, Hesam; Nehorai, Arye; Lew, Matthew D.

    2018-02-01

    In single-molecule (SM) super-resolution microscopy, the complexity of a biological structure, high molecular density, and a low signal-to-background ratio (SBR) may lead to imaging artifacts without a robust localization algorithm. Moreover, engineered point spread functions (PSFs) for 3D imaging pose difficulties due to their intricate features. We develop a Robust Statistical Estimation algorithm, called RoSE, that enables joint estimation of the 3D location and photon counts of SMs accurately and precisely using various PSFs under conditions of high molecular density and low SBR.

  17. Optimal field-splitting algorithm in intensity-modulated radiotherapy: Evaluations using head-and-neck and female pelvic IMRT cases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dou, Xin; Kim, Yusung, E-mail: yusung-kim@uiowa.edu; Bayouth, John E.

    2013-04-01

    To develop an optimal field-splitting algorithm of minimal complexity and to verify the algorithm using head-and-neck (H and N) and female pelvic intensity-modulated radiotherapy (IMRT) cases. An optimal field-splitting algorithm was developed in which a large intensity map (IM) is split into multiple sub-IMs (≥2). The algorithm reduces the total complexity by minimizing the monitor units (MU) delivered and the segment number of each sub-IM. The algorithm was verified through comparison studies with the algorithm used in a commercial treatment planning system. Seven IMRT H and N and female pelvic cancer cases (54 IMs) were analyzed by MU, segment numbers, and dose distributions. The optimal field-splitting algorithm was found to reduce both total MU and the total number of segments. We found on average a 7.9 ± 11.8% and 9.6 ± 18.2% reduction in MU and segment numbers for the H and N IMRT cases, and an 11.9 ± 17.4% and 11.1 ± 13.7% reduction for the female pelvic cases. The overall percent (absolute) reductions in the numbers of MU and segments were on average −9.7 ± 14.6% (−15 ± 25 MU) and −10.3 ± 16.3% (−3 ± 5), respectively. In addition, all dose distributions from the optimal field-splitting method were improved. The optimal field-splitting algorithm shows considerable improvements in both total MU and total segment number. The algorithm is expected to be beneficial for radiotherapy treatment with large-field IMRT.

  18. SU-E-T-157: CARMEN: A MatLab-Based Research Platform for Monte Carlo Treatment Planning (MCTP) and Customized System for Planning Evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baeza, J.A.; Ureba, A.; Jimenez-Ortega, E.

    Purpose: Although several radiotherapy research platforms exist, such as CERR (the most widely used and referenced), SlicerRT (which allows treatment plan comparison from various sources), and MMCTP (a full MCTP system), a full MCTP toolset is still needed that provides users complete control of calculation grids, interpolation methods and filters in order to “fairly” compare results from different TPSs, supporting verification with experimental measurements. Methods: This work presents CARMEN, a MatLab-based platform including multicore and GPGPU-accelerated functions for loading RT data, designing treatment plans, and evaluating dose matrices and experimental data. CARMEN supports anatomic and functional imaging in DICOM format, as well as RTSTRUCT, RTPLAN and RTDOSE. Besides, it contains numerous tools to accomplish the MCTP process, managing egs4phant and phase space files. The CARMEN planning mode assists in designing IMRT, VMAT and MERT treatments via both inverse and direct optimization. The evaluation mode contains a comprehensive toolset (e.g. 2D/3D gamma evaluation, difference matrices, profiles, DVH, etc.) to compare datasets from commercial TPSs, MC simulations (i.e. 3ddose) and radiochromic film in a user-controlled manner. Results: CARMEN has been validated against commercial RTPSs and well-established evaluation tools, showing coherent behavior of its multiple algorithms. Furthermore, the CARMEN platform has been used to generate competitive complex treatments that have been published in comparative studies. Conclusion: A new research-oriented MCTP platform with a customized validation toolset has been presented. Despite being coded in a high-level programming language, CARMEN is agile due to the use of parallel algorithms. The widespread use of MatLab provides straightforward access to CARMEN’s algorithms for most researchers. Similarly, the platform can benefit from scientific developments of the MatLab community, such as filters, registration algorithms, etc. Finally, CARMEN highlights the importance of grid and filtering control in treatment plan comparison.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Americo, Jeffrey L.; Sood, Cindy L.; Cotter, Catherine A.

    Classical inbred mice are extensively used for virus research. However, we recently found that some wild-derived inbred mouse strains are more susceptible than classical strains to monkeypox virus. Experiments described here indicated that the 50% lethal doses of vaccinia virus (VACV) and cowpox virus (CPXV) were two logs lower in wild-derived inbred CAST/Ei mice than in classical inbred BALB/c mice, whereas there was little difference in the susceptibility of the mouse strains to herpes simplex virus. Live bioluminescence imaging was used to follow spread of pathogenic and attenuated VACV strains and CPXV from nasal passages to organs in the chest and abdomen of CAST/Ei mice. Luminescence increased first in the head and then simultaneously in the chest and abdomen in a dose-dependent manner. The spreading kinetics was more rapid with VACV than CPXV, although the peak photon flux was similar. These data suggest advantages of CAST/Ei mice for orthopoxvirus studies. - Highlights: • Wild-derived inbred CAST/Ei mice are susceptible to vaccinia virus and cowpox virus. • Morbidity and mortality from orthopoxviruses are greater in CAST/Ei than BALB/c mice. • Morbidity and mortality from herpes simplex virus type 1 are similar in both mice. • Imaging shows virus spread from nose to lungs, abdominal organs and brain. • Vaccinia virus spreads more rapidly than cowpox virus.

  20. Monte Carlo investigation of backscatter point spread function for x-ray imaging examinations

    NASA Astrophysics Data System (ADS)

    Xiong, Zhenyu; Vijayan, Sarath; Rudin, Stephen; Bednarek, Daniel R.

    2017-03-01

    X-ray imaging examinations, especially complex interventions, may result in relatively high doses to the patient's skin, inducing skin injuries. A method was developed to determine the skin-dose distribution for non-uniform x-ray beams by convolving the backscatter point spread function (PSF) with the primary-dose distribution to generate the backscatter distribution that, when added to the primary dose, gives the total-dose distribution. This technique was incorporated in the dose-tracking system (DTS), which provides a real-time, color-coded 3D mapping of skin dose during fluoroscopic procedures. The aim of this work is to investigate the variation of the backscatter PSF with different parameters. The backscatter PSF of a 1-mm x-ray beam was generated with the EGSnrc Monte Carlo code for different x-ray beam energies, different soft-tissue thicknesses above bone, different bone thicknesses, and different entrance-beam angles, as well as for different locations on the SK-150 anthropomorphic head phantom. The results show a reduction of the peak scatter-to-primary dose ratio of 48% when the x-ray beam energy is increased from 40 keV to 120 keV. The backscatter dose was reduced when bone was beneath the soft-tissue layer, and this reduction increased with thinner soft-tissue and thicker bone layers. The backscatter factor increased by about 21% as the angle of incidence of the beam with the entrance surface decreased from 90° (perpendicular) to 30°. The backscatter PSF differed by up to 15% for different locations on the SK-150 phantom. The results of this study can be used to improve the accuracy of dose calculation when using PSF convolution in the DTS.
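
    The convolution step itself is simple once a PSF kernel is available. A sketch using a Gaussian stand-in for the Monte-Carlo-derived backscatter PSF; the 0.3 scatter-to-primary scaling is an assumed toy value:

        import numpy as np
        from scipy.signal import fftconvolve

        x = np.linspace(-20, 20, 81)
        xx, yy = np.meshgrid(x, x)
        psf = np.exp(-(xx**2 + yy**2) / (2 * 5.0**2))   # toy Gaussian backscatter kernel
        psf *= 0.3 / psf.sum()                           # assumed scatter-to-primary ratio 0.3

        primary = np.zeros((81, 81))
        primary[30:50, 30:50] = 1.0                      # toy primary-dose footprint (a.u.)

        backscatter = fftconvolve(primary, psf, mode="same")
        total = primary + backscatter                    # total skin dose = primary + backscatter
        print(f"peak total/primary ratio: {total.max():.2f}")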

  1. Development of a pharmacogenetic-guided warfarin dosing algorithm for Puerto Rican patients

    PubMed Central

    Ramos, Alga S; Seip, Richard L; Rivera-Miranda, Giselle; Felici-Giovanini, Marcos E; Garcia-Berdecia, Rafael; Alejandro-Cowan, Yirelia; Kocherla, Mohan; Cruz, Iadelisse; Feliu, Juan F; Cadilla, Carmen L; Renta, Jessica Y; Gorowski, Krystyna; Vergara, Cunegundo; Ruaño, Gualberto; Duconge, Jorge

    2012-01-01

    Aim This study was aimed at developing a pharmacogenetic-driven warfarin-dosing algorithm in 163 admixed Puerto Rican patients on stable warfarin therapy. Patients & methods A multiple linear-regression analysis was performed using log-transformed effective warfarin dose as the dependent variable, and combining CYP2C9 and VKORC1 genotyping with other relevant nongenetic clinical and demographic factors as independent predictors. Results The model explained more than two-thirds of the observed variance in the warfarin dose among Puerto Ricans, and also produced significantly better ‘ideal dose’ estimates than two pharmacogenetic models and clinical algorithms published previously, with the greatest benefit seen in patients ultimately requiring <7 mg/day. We also assessed the clinical validity of the model using an independent validation cohort of 55 Puerto Rican patients from Hartford, CT, USA (R2 = 51%). Conclusion Our findings provide the basis for planning prospective pharmacogenetic studies to demonstrate the clinical utility of genotyping warfarin-treated Puerto Rican patients. PMID:23215886
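
    Schematically, such an algorithm is ordinary least squares on log-transformed dose with genotype and clinical covariates. The predictors and coefficients below are placeholders, not the published Puerto Rican model:

        import numpy as np

        rng = np.random.default_rng(7)
        n = 163
        X = np.column_stack([
            np.ones(n),                  # intercept
            rng.integers(0, 3, n),       # CYP2C9 variant allele count (toy)
            rng.integers(0, 3, n),       # VKORC1 variant allele count (toy)
            rng.normal(65, 12, n),       # age in years (toy clinical covariate)
        ])
        # synthetic log-dose generated from placeholder coefficients
        log_dose = X @ np.array([2.0, -0.3, -0.4, -0.005]) + rng.normal(0, 0.2, n)

        beta, *_ = np.linalg.lstsq(X, log_dose, rcond=None)   # fit the regression
        pred_dose_mg = np.exp(X @ beta)                       # back-transform to mg/day
        print(beta.round(3))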

  2. Applications of nonlocal means algorithm in low-dose X-ray CT image processing and reconstruction: a review

    PubMed Central

    Zhang, Hao; Zeng, Dong; Zhang, Hua; Wang, Jing; Liang, Zhengrong

    2017-01-01

    Low-dose X-ray computed tomography (LDCT) imaging is highly recommended for use in the clinic because of growing concerns over excessive radiation exposure. However, the CT images reconstructed by the conventional filtered back-projection (FBP) method from low-dose acquisitions may be severely degraded with noise and streak artifacts due to excessive X-ray quantum noise, or with view-aliasing artifacts due to insufficient angular sampling. In 2005, the nonlocal means (NLM) algorithm was introduced as a non-iterative edge-preserving filter to denoise natural images corrupted by additive Gaussian noise, and showed superior performance. It has since been adapted and applied to many other image types and various inverse problems. This paper specifically reviews the applications of the NLM algorithm in LDCT image processing and reconstruction, and explicitly demonstrates its improving effects on the reconstructed CT image quality from low-dose acquisitions. The effectiveness of these applications on LDCT and their relative performance are described in detail. PMID:28303644
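
    For reference, a compact (and deliberately slow) NLM filter for a 2D image, using a simplified patch-distance weighting; production LDCT implementations add patch pre-selection, adaptive smoothing parameters, and fast distance updates:

        import numpy as np

        def nlm(img, patch=3, search=7, h=0.1):
            pad = patch // 2
            padded = np.pad(img, pad, mode="reflect")
            out = np.zeros_like(img)
            ny, nx = img.shape
            for i in range(ny):
                for j in range(nx):
                    p = padded[i:i + patch, j:j + patch]       # reference patch
                    i0, i1 = max(0, i - search), min(ny, i + search + 1)
                    j0, j1 = max(0, j - search), min(nx, j + search + 1)
                    wsum = vsum = 0.0
                    for u in range(i0, i1):                    # search window
                        for v in range(j0, j1):
                            q = padded[u:u + patch, v:v + patch]
                            w = np.exp(-np.mean((p - q) ** 2) / h**2)  # patch similarity weight
                            wsum += w
                            vsum += w * img[u, v]
                    out[i, j] = vsum / wsum                    # weighted average of pixels
            return out

        rng = np.random.default_rng(0)
        noisy = np.clip(np.eye(32) + rng.normal(0, 0.1, (32, 32)), 0, 1)
        print(np.abs(nlm(noisy) - np.eye(32)).mean())   # residual error after denoising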

  3. Semi-blind sparse image reconstruction with application to MRFM.

    PubMed

    Park, Se Un; Dobigeon, Nicolas; Hero, Alfred O

    2012-09-01

    We propose a solution to the image deconvolution problem where the convolution kernel or point spread function (PSF) is assumed to be only partially known. Small perturbations generated from the model are exploited to produce a few principal components explaining the PSF uncertainty in a high-dimensional space. Unlike recent developments on blind deconvolution of natural images, we assume the image is sparse in the pixel basis, a natural sparsity arising in magnetic resonance force microscopy (MRFM). Our approach adopts a Bayesian Metropolis-within-Gibbs sampling framework. The performance of our Bayesian semi-blind algorithm for sparse images is superior to previously proposed semi-blind algorithms such as the alternating minimization algorithm and blind algorithms developed for natural images. We illustrate our myopic algorithm on real MRFM tobacco virus data.

  4. SU-E-J-109: Evaluation of Deformable Accumulated Parotid Doses Using Different Registration Algorithms in Adaptive Head and Neck Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, S; Chinese PLA General Hospital, Beijing, 100853 China; Liu, B

    2015-06-15

    Purpose: Three deformable image registration (DIR) algorithms were utilized to perform deformable dose accumulation for head and neck tomotherapy treatments, and the differences in the accumulated doses were evaluated. Methods: Daily MVCT data for 10 patients with pathologically proven nasopharyngeal cancers were analyzed. The data were acquired using tomotherapy (TomoTherapy, Accuray) at the PLA General Hospital. The prescription dose to the primary target was 70 Gy in 33 fractions. Three DIR methods (B-spline, Diffeomorphic Demons and MIMvista) were used to propagate parotid structures from the planning CTs to the daily CTs and to accumulate the fractionated dose on the planning CTs. The mean accumulated doses of the parotids were quantitatively compared, and the uncertainties of the propagated parotid contours were evaluated using the Dice similarity index (DSI). Results: The planned mean dose of the ipsilateral parotids (32.42±3.13 Gy) was slightly higher than that of the contralateral parotids (31.38±3.19 Gy) in the 10 patients. The differences between the accumulated mean doses of the ipsilateral parotids under the B-spline, Demons and MIMvista deformation algorithms (36.40±5.78 Gy, 34.08±6.72 Gy and 33.72±2.63 Gy) were statistically significant (B-spline vs Demons, p < 0.0001; B-spline vs MIMvista, p = 0.002). The differences between those of the contralateral parotids (34.08±4.82 Gy, 32.42±4.80 Gy and 33.92±4.65 Gy) were also significant (B-spline vs Demons, p = 0.009; B-spline vs MIMvista, p = 0.074). For the DSI analysis, the scores of the B-spline, Demons and MIMvista DIRs were 0.90, 0.89 and 0.76. Conclusion: Shrinkage of parotid volumes results in a dose increase to the parotid glands in adaptive head and neck radiotherapy. The accumulated doses of the parotids show significant differences between the different DIR algorithms applied between kVCT and MVCT. Therefore, a volume-based criterion (i.e. DSI) as a quantitative evaluation of registration accuracy is essential, in addition to visual assessment by the treating physician. This work was supported in part by a grant from the Chinese Natural Science Foundation (Grant No. 11105225).
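
    The DSI used above is the Dice overlap between a propagated contour and a reference contour. A minimal sketch on boolean masks:

        import numpy as np

        def dice(mask_a, mask_b):
            inter = np.logical_and(mask_a, mask_b).sum()
            return 2.0 * inter / (mask_a.sum() + mask_b.sum())

        ref = np.zeros((64, 64), dtype=bool); ref[20:44, 20:44] = True    # reference ROI
        prop = np.zeros((64, 64), dtype=bool); prop[22:46, 21:45] = True  # deformed ROI
        print(f"DSI = {dice(ref, prop):.2f}")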

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moura, Eduardo S., E-mail: emoura@wisc.edu; Micka, John A.; Hammer, Cliff G.

    Purpose: This work presents the development of a phantom to verify the treatment planning system (TPS) algorithms used for high-dose-rate (HDR) brachytherapy. It is designed to measure the relative dose in heterogeneous media. The experimental details used, simulation methods, and comparisons with a commercial TPS are also provided. Methods: To simulate heterogeneous conditions, four materials were used: Virtual Water™ (VM), BR50/50™, cork, and aluminum. The materials were arranged in 11 heterogeneity configurations. Three dosimeters were used to measure the relative response from an HDR {sup 192}Ir source: TLD-100™, Gafchromic{sup ®} EBT3 film, and an Exradin™ A1SL ionization chamber. To compare the results from the experimental measurements, the various configurations were modeled in the PENELOPE/penEasy Monte Carlo code. Images of each setup geometry were acquired from a CT scanner and imported into the BrachyVision™ TPS software, which includes the grid-based Boltzmann solver Acuros™. The results of the measurements performed in the heterogeneous setups were normalized to the dose values measured in the homogeneous Virtual Water™ setup, and the respective differences due to the heterogeneities were considered. Additionally, dose values calculated based on the American Association of Physicists in Medicine Task Group 43 formalism were compared to dose values calculated with the Acuros™ algorithm in the phantom. Calculated doses were compared at the same points where measurements had been performed. Results: Differences in the relative response as high as 11.5% from the homogeneous setup were found when the heterogeneous materials were inserted into the experimental phantom. The aluminum and cork materials produced larger differences than the plastic materials, with the BR50/50™ material producing results similar to the Virtual Water™ results. Our experimental methods agree with the PENELOPE/penEasy simulations for most setups and dosimeters. The TPS relative differences with the Acuros™ algorithm were similar in both experimental and simulated setups. The discrepancy between the BrachyVision™ Acuros™ and TG-43 dose responses in the phantom described by this work exceeded 12% for certain setups. Conclusions: The results derived from the phantom measurements show good agreement with the simulations and TPS calculations using the Acuros™ algorithm. Differences in the dose responses were evident in the experimental results when heterogeneous materials were introduced. These measurements prove the usefulness of the heterogeneous phantom for verification of HDR treatment planning systems based on model-based dose calculation algorithms.

  6. A Comparative Study of Different Deblurring Methods Using Filters

    NASA Astrophysics Data System (ADS)

    Srimani, P. K.; Kavitha, S.

    2011-12-01

    This paper studies the restoration of Gaussian-blurred images using four deblurring techniques, namely the Wiener filter, the regularized filter, the Lucy-Richardson deconvolution algorithm and the blind deconvolution algorithm, given knowledge of the point spread function (PSF) that corrupted the blurred image. The techniques are applied to a scanned image of a seven-month fetus in the womb and compared with one another, so as to choose the best technique for image restoration. The paper also studies the restoration of a blurred image using a regularized filter (RF) with no information about the PSF, applying the same four techniques after estimating a guess of the PSF. The number of iterations and the weight threshold needed to choose the best PSF guesses for image restoration with these techniques are determined.

  7. Real-time topic-aware influence maximization using preprocessing.

    PubMed

    Chen, Wei; Lin, Tian; Yang, Cheng

    2016-01-01

    Influence maximization is the task of finding a set of seed nodes in a social network such that the influence spread of these seed nodes, based on a certain influence diffusion model, is maximized. Topic-aware influence diffusion models have been proposed recently to address the issue that influence between a pair of users is often topic-dependent and that the information, ideas, innovations, etc. being propagated in networks are typically mixtures of topics. In this paper, we focus on the topic-aware influence maximization task. In particular, we study preprocessing methods to avoid redoing influence maximization for each topic mixture from scratch. We explore two preprocessing algorithms with theoretical justifications. Our empirical results on data obtained in a couple of existing studies demonstrate that one of our algorithms stands out as a strong candidate, providing microsecond online response time and competitive influence spread, with reasonable preprocessing effort.
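
    For context, the baseline such preprocessing methods accelerate is greedy seed selection with Monte-Carlo spread estimation under the independent cascade model; a small sketch, in which the graph, probabilities and trial counts are toy values:

        import random

        def spread(graph, seeds, p=0.1, trials=200):
            # Monte-Carlo estimate of expected spread under independent cascade.
            rng = random.Random(0)
            total = 0
            for _ in range(trials):
                active, frontier = set(seeds), list(seeds)
                while frontier:
                    u = frontier.pop()
                    for v in graph.get(u, []):
                        if v not in active and rng.random() < p:
                            active.add(v); frontier.append(v)
                total += len(active)
            return total / trials

        def greedy(graph, k):
            # Repeatedly add the node with the largest marginal spread gain.
            seeds = []
            nodes = set(graph) | {v for nbrs in graph.values() for v in nbrs}
            for _ in range(k):
                best = max(nodes - set(seeds), key=lambda v: spread(graph, seeds + [v]))
                seeds.append(best)
            return seeds

        g = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5], 5: []}
        print(greedy(g, 2))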

  8. Accurate simulation of backscattering spectra in the presence of sharp resonances

    NASA Astrophysics Data System (ADS)

    Barradas, N. P.; Alves, E.; Jeynes, C.; Tosaki, M.

    2006-06-01

    In elastic backscattering spectrometry, the shape of the observed spectrum due to resonances in the nuclear scattering cross-section is influenced by many factors. If the energy spread of the beam before interaction is larger than the resonance width, then a simple convolution with the energy spread on exit and with the detection system resolution will lead to a calculated spectrum with a resonance much sharper than the observed signal. Also, the yield from a thin layer will not be calculated accurately. We have developed an algorithm for the accurate simulation of backscattering spectra in the presence of sharp resonances. Albeit approximate, the algorithm leads to dramatic improvements in the quality and accuracy of the simulations. It is simple to implement and leads to only small increases of the calculation time, being thus suitable for routine data analysis. We show different experimental examples, including samples with roughness and porosity.

  9. SU-D-202-04: Validation of Deformable Image Registration Algorithms for Head and Neck Adaptive Radiotherapy in Routine Clinical Setting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, L; Pi, Y; Chen, Z

    2016-06-15

    Purpose: To evaluate the differences in ROI contours and accumulated dose using different deformable image registration (DIR) algorithms for head and neck (H&N) adaptive radiotherapy. Methods: Eight H&N cancer patients were randomly selected from the affiliated hospital. During the treatment, patients were rescanned every week, with ROIs delineated by a radiation oncologist on each weekly CT. New weekly treatment plans were also re-designed, with a consistent dose prescription, on the rescanned CTs and executed for one week on a Siemens CT-on-rails accelerator. In the end, six weekly CT scans (CT1 to CT6) and six weekly treatment plans were obtained for each patient. The primary CT1 was set as the reference CT for DIR with the remaining five weekly CTs, using the ANACONDA and MORFEUS algorithms separately in RayStation, with the external skin ROI set as the controlling ROI in both. All calculated weekly doses were deformed and accumulated on the corresponding reference CT1 according to the deformation vector fields (DVFs) generated by the two DIR algorithms, respectively. Thus both ANACONDA-based and MORFEUS-based accumulated total doses on CT1 were obtained for each patient. At the same time, the ROIs on CT1 were mapped to generate corresponding ROIs on CT6 using the ANACONDA and MORFEUS DIR algorithms. DICE coefficients between the DIR-deformed ROIs and those delineated by the radiation oncologist on CT6 were calculated. Results: For the DIR-accumulated dose, PTV D95 and Left-Eyeball Dmax show significant differences of 67.13 cGy and 109.29 cGy, respectively (Table 1). For the DIR-mapped ROIs, the PTV, spinal cord and left optic nerve show differences of −0.025, −0.127 and −0.124 (Table 2). Conclusion: Even two excellent DIR algorithms can give divergent results for ROI deformation and dose accumulation. As more and more TPSs integrate DIR modules, there is an urgent need to recognize the potential risk of using DIR clinically.

  10. Report of the AAPM Task Group No. 105: Issues associated with clinical implementation of Monte Carlo-based photon and electron external beam treatment planning.

    PubMed

    Chetty, Indrin J; Curran, Bruce; Cygler, Joanna E; DeMarco, John J; Ezzell, Gary; Faddegon, Bruce A; Kawrakow, Iwan; Keall, Paul J; Liu, Helen; Ma, C M Charlie; Rogers, D W O; Seuntjens, Jan; Sheikh-Bagheri, Daryoush; Siebers, Jeffrey V

    2007-12-01

    The Monte Carlo (MC) method has been shown through many research studies to calculate accurate dose distributions for clinical radiotherapy, particularly in heterogeneous patient tissues where the effects of electron transport cannot be accurately handled with conventional, deterministic dose algorithms. Despite its proven accuracy and the potential for improved dose distributions to influence treatment outcomes, the long calculation times previously associated with MC simulation rendered this method impractical for routine clinical treatment planning. However, the development of faster codes optimized for radiotherapy calculations and improvements in computer processor technology have substantially reduced calculation times to, in some instances, within minutes on a single processor. These advances have motivated several major treatment planning system vendors to embark upon the path of MC techniques. Several commercial vendors have already released or are currently in the process of releasing MC algorithms for photon and/or electron beam treatment planning. Consequently, the accessibility and use of MC treatment planning algorithms may well become widespread in the radiotherapy community. With MC simulation, dose is computed stochastically using first principles; this method is therefore quite different from conventional dose algorithms. Issues such as statistical uncertainties, the use of variance reduction techniques, the ability to account for geometric details in the accelerator treatment head simulation, and other features, are all unique components of a MC treatment planning algorithm. Successful implementation by the clinical physicist of such a system will require an understanding of the basic principles of MC techniques. The purpose of this report, while providing education and review on the use of MC simulation in radiotherapy planning, is to set out, for both users and developers, the salient issues associated with clinical implementation and experimental verification of MC dose algorithms. As the MC method is an emerging technology, this report is not meant to be prescriptive. Rather, it is intended as a preliminary report to review the tenets of the MC method and to provide the framework upon which to build a comprehensive program for commissioning and routine quality assurance of MC-based treatment planning systems.

  11. Investigation of effective decision criteria for multiobjective optimization in IMRT.

    PubMed

    Holdsworth, Clay; Stewart, Robert D; Kim, Minsun; Liao, Jay; Phillips, Mark H

    2011-06-01

    To investigate how using different sets of decision criteria impacts the quality of intensity modulated radiation therapy (IMRT) plans obtained by multiobjective optimization. A multiobjective optimization evolutionary algorithm (MOEA) was used to produce sets of IMRT plans. The MOEA consisted of two interacting algorithms: (i) a deterministic inverse planning optimization of beamlet intensities that minimizes a weighted sum of quadratic penalty objectives to generate IMRT plans and (ii) an evolutionary algorithm that selects the superior IMRT plans using decision criteria and uses those plans to determine the new weights and penalty objectives of each new plan. Plans resulting from the deterministic algorithm were evaluated by the evolutionary algorithm using a set of decision criteria for both targets and organs at risk (OARs). Decision criteria used included variation in the target dose distribution, mean dose, maximum dose, generalized equivalent uniform dose (gEUD), an equivalent uniform dose (EUD(alpha,beta)) formula derived from the linear-quadratic survival model, and points on dose volume histograms (DVHs). In order to quantitatively compare results from trials using different decision criteria, a neutral set of comparison metrics was used. For each set of decision criteria investigated, IMRT plans were calculated for four different cases: two simple prostate cases, one complex prostate case, and one complex head and neck case. When smaller numbers of decision criteria, more descriptive decision criteria, or less anti-correlated decision criteria were used to characterize plan quality during multiobjective optimization, dose to OARs and target dose variation were reduced in the final population of plans. Mean OAR dose and gEUD (a = 4) decision criteria were comparable. Using maximum dose decision criteria for OARs near targets resulted in inferior populations that focused solely on low target variance at the expense of high OAR dose. Target dose range, (D(max) - D(min)), decision criteria were found to be most effective for keeping targets uniform. Using target gEUD decision criteria resulted in much lower OAR doses but much higher target dose variation. EUD(alpha,beta)-based decision criteria focused on a region of plan space that was a compromise between target and OAR objectives. None of these target decision criteria dominated plans using other criteria, but each focused on approaching a different area of the Pareto front. The choice of decision criteria implemented in the MOEA had a significant impact on the region explored and the rate of convergence toward the Pareto front. When more decision criteria, anti-correlated decision criteria, or decision criteria with insufficient information were implemented, inferior populations resulted. When more informative decision criteria were used, such as gEUD, EUD(alpha,beta), target dose range, and mean dose, MOEA optimizations focused on approaching different regions of the Pareto front, but did not dominate each other. Using simple OAR decision criteria and target EUD(alpha,beta) decision criteria demonstrated the potential to generate IMRT plans that significantly reduce dose to OARs while achieving the same or better tumor control when clinical requirements on target dose variance can be met or relaxed.
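
    The gEUD criterion referred to above reduces a dose-volume distribution to a single number; a = 1 recovers the mean dose and large a approaches the maximum dose. A minimal sketch:

        import numpy as np

        def geud(dose_voxels, a):
            # gEUD = ( mean(d_i^a) )^(1/a) over the structure's voxel doses
            return np.mean(np.asarray(dose_voxels, dtype=float) ** a) ** (1.0 / a)

        oar_doses = np.array([5.0, 12.0, 20.0, 35.0, 60.0])   # toy voxel doses (Gy)
        for a in (1, 4, 20):
            print(f"gEUD(a={a}) = {geud(oar_doses, a):.1f} Gy")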

  12. A spatially encoded dose difference maximal intensity projection map for patient dose evaluation: A new first line patient quality assurance tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu Weigang; Graff, Pierre; Boettger, Thomas

    2011-04-15

    Purpose: To develop a spatially encoded dose difference maximal intensity projection (DD-MIP) as an online patient dose evaluation tool for visualizing the dose differences between the planning dose and dose on the treatment day. Methods: Megavoltage cone-beam CT (MVCBCT) images acquired on the treatment day are used for generating the dose difference index. Each index is represented by different colors for underdose, acceptable, and overdose regions. A maximal intensity projection (MIP) algorithm is developed to compress all the information of an arbitrary 3D dose difference index into a 2D DD-MIP image. In such an algorithm, a distance transformation is generated based on the planning CT. Then, two new volumes representing the overdose and underdose regions of the dose difference index are encoded with the distance transformation map. The distance-encoded indices of each volume are normalized using the skin distance obtained on the planning CT. After that, two MIPs are generated based on the underdose and overdose volumes with green-to-blue and green-to-red lookup tables, respectively. Finally, the two MIPs are merged with an appropriate transparency level and rendered in planning CT images. Results: The spatially encoded DD-MIP was implemented in a dose-guided radiotherapy prototype and tested on 33 MVCBCT images from six patients. The user can easily establish the threshold for the overdose and underdose. A 3% difference between the treatment and planning dose was used as the threshold in the study; hence, the DD-MIP shows red or blue for dose differences >3% or ≤−3%, respectively. With such a method, the overdose and underdose regions can be visualized and distinguished without being overshadowed by superficial dose differences. Conclusions: A DD-MIP algorithm was developed that compresses information from 3D into a single or two orthogonal projections while indicating to the user whether the dose difference is on the skin surface or deeper.
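
    The distance-encoding idea can be sketched as follows; this is a hedged toy version (the normalization below is a simplification of the skin-distance normalization described in the abstract, and all names are illustrative).

    ```python
    # Toy DD-MIP: depth-encode over/underdose regions before projecting, so
    # deep dose differences are not overshadowed by superficial ones.
    import numpy as np
    from scipy import ndimage

    def dd_mip(dose_plan, dose_treat, body_mask, threshold=0.03, axis=0):
        diff = (dose_treat - dose_plan) / np.maximum(dose_plan, 1e-9)
        depth = ndimage.distance_transform_edt(body_mask)  # distance from skin
        depth /= max(depth.max(), 1e-9)                    # simplified normalization
        over = np.where(diff > threshold, depth, 0.0)
        under = np.where(diff < -threshold, depth, 0.0)
        # Two 2D projections; brighter pixels indicate deeper differences
        return over.max(axis=axis), under.max(axis=axis)
    ```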

  13. Ketamine modulation of the haemodynamic response to spreading depolarization in the gyrencephalic swine brain

    PubMed Central

    Santos, Edgar; Schöll, Michael; Kunzmann, Kevin; Stock, Christian; Silos, Humberto; Unterberg, Andreas W; Sakowitz, Oliver W

    2016-01-01

    Spreading depolarization (SD) generates significant alterations in cerebral haemodynamics, which can have detrimental consequences on brain function and integrity. Ketamine has shown an important capacity to modulate SD; however, its impact on SD haemodynamic response is incompletely understood. We investigated the effect of two therapeutic ketamine dosages, a low-dose of 2 mg/kg/h and a high-dose of 4 mg/kg/h, on the haemodynamic response to SD in the gyrencephalic swine brain. Cerebral blood volume, pial arterial diameter and cerebral blood flow were assessed through intrinsic optical signal imaging and laser-Doppler flowmetry. Our findings indicate that frequent SDs caused a persistent increase in the baseline pial arterial diameter, which can lead to a diminished capacity to further dilate. Ketamine infused at a low-dose reduced the hyperemic/vasodilative response to SD; however, it did not alter the subsequent oligemic/vasoconstrictive response. This low-dose did not prevent the baseline diameter increase and the diminished dilative capacity. Only infusion of ketamine at a high-dose suppressed SD and the coupled haemodynamic response. Therefore, the haemodynamic response to SD can be modulated by continuous infusion of ketamine. However, its use in pathological models needs to be explored to corroborate its possible clinical benefit. PMID:27126324

  14. Ketamine modulation of the haemodynamic response to spreading depolarization in the gyrencephalic swine brain.

    PubMed

    Sánchez-Porras, Renán; Santos, Edgar; Schöll, Michael; Kunzmann, Kevin; Stock, Christian; Silos, Humberto; Unterberg, Andreas W; Sakowitz, Oliver W

    2017-05-01

    Spreading depolarization (SD) generates significant alterations in cerebral haemodynamics, which can have detrimental consequences on brain function and integrity. Ketamine has shown an important capacity to modulate SD; however, its impact on SD haemodynamic response is incompletely understood. We investigated the effect of two therapeutic ketamine dosages, a low-dose of 2 mg/kg/h and a high-dose of 4 mg/kg/h, on the haemodynamic response to SD in the gyrencephalic swine brain. Cerebral blood volume, pial arterial diameter and cerebral blood flow were assessed through intrinsic optical signal imaging and laser-Doppler flowmetry. Our findings indicate that frequent SDs caused a persistent increase in the baseline pial arterial diameter, which can lead to a diminished capacity to further dilate. Ketamine infused at a low-dose reduced the hyperemic/vasodilative response to SD; however, it did not alter the subsequent oligemic/vasoconstrictive response. This low-dose did not prevent the baseline diameter increase and the diminished dilative capacity. Only infusion of ketamine at a high-dose suppressed SD and the coupled haemodynamic response. Therefore, the haemodynamic response to SD can be modulated by continuous infusion of ketamine. However, its use in pathological models needs to be explored to corroborate its possible clinical benefit.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stathakis, S; Defoor, D; Saenz, D

    Purpose: Stereotactic radiosurgery (SRS) outcomes are related to the delivered dose to the target and to surrounding tissue. We have commissioned a Monte Carlo based dose calculation algorithm to recalculate the delivered dose of plans generated with a pencil beam dose engine. Methods: Twenty consecutive previously treated patients were selected for this study. All plans were generated using the iPlan treatment planning system (TPS) and calculated using the pencil beam algorithm. Each patient plan consisted of 1 to 3 targets and was treated using dynamically conformal arcs or intensity modulated beams. Multi-target treatments were delivered using multiple isocenters, one for each target. These plans were recalculated for the purpose of this study using a single isocenter. The CT image sets, along with the plans, doses, and structures, were DICOM-exported to the Monaco TPS and the dose was recalculated using the same voxel resolution and monitor units. Benchmark data were also generated prior to patient calculations to assess the accuracy of the two TPSs against measurements using a micro ionization chamber in solid water. Results: Good agreement, within −0.4% for Monaco and +2.2% for iPlan, was observed for measurements in the water phantom. Doses in patient geometry revealed differences of up to 9.6% for single-target plans and 9.3% for multiple-target-multiple-isocenter plans. The average dose differences for multi-target-single-isocenter plans were approximately 1.4%. Similar differences were observed for the OARs and integral dose. Conclusion: Accuracy of the beam model is crucial for the dose calculation, especially in the case of small fields such as those used in SRS treatments. A superior dose calculation algorithm such as Monte Carlo, with properly commissioned beam models, which is unaffected by the lack of electronic equilibrium, should be preferred for the calculation of small fields to improve accuracy.

  16. Minimum anesthetic volume in regional anesthesia by using ultrasound-guidance.

    PubMed

    Di Filippo, Alessandro; Falsini, Silvia; Adembri, Chiara

    2016-01-01

    Ultrasound guidance in regional anesthesia ensures visualization of needle placement and of the spread of Local Anesthetics. Over the past few years there has been substantial interest in determining the Minimum Effective Anesthetic Volume necessary to accomplish surgical anesthesia. The precise, real-time visualization of Local Anesthetic spread during an ultrasound-guided block may represent the best prerequisite for reducing the Local Anesthetic dose and Local Anesthetic-related effects. We report a series of studies that have demonstrated the efficacy of ultrasound-guided blocks in reducing the Local Anesthetic dose while obtaining surgical anesthesia, as compared with blocks performed using blind or electrical nerve stimulation techniques. Unfortunately, the results of these studies are widely divergent and do not definitively indicate an effective dose for each block; it is nonetheless clear that, through the use of ultrasound guidance, it is possible to reduce the dose of anesthetic when performing blocks. Copyright © 2014 Sociedade Brasileira de Anestesiologia. Published by Elsevier Editora Ltda. All rights reserved.

  17. MO-FG-CAMPUS-TeP1-05: Rapid and Efficient 3D Dosimetry for End-To-End Patient-Specific QA of Rotational SBRT Deliveries Using a High-Resolution EPID

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Y M; Han, B; Xing, L

    2016-06-15

    Purpose: EPID-based patient-specific quality assurance provides verification of the planning setup and delivery process that phantomless QA and log-file based virtual dosimetry methods cannot achieve. We present a method for EPID-based QA utilizing spatially-variant EPID response kernels that allows for direct calculation of the entrance fluence and 3D phantom dose. Methods: An EPID dosimetry system was utilized for 3D dose reconstruction in a cylindrical phantom for the purposes of end-to-end QA. Monte Carlo (MC) methods were used to generate pixel-specific point-spread functions (PSFs) characterizing the spatially non-uniform EPID portal response in the presence of phantom scatter. The spatially-variant PSFs were decomposed into spatially-invariant basis PSFs, with the symmetric central-axis kernel as the primary basis kernel and off-axis kernels representing orthogonal perturbations in pixel-space. This compact and accurate characterization enables the use of a modified Richardson-Lucy deconvolution algorithm to directly reconstruct entrance fluence from EPID images without iterative scatter subtraction. High-resolution phantom dose kernels were cogenerated in MC with the PSFs, enabling direct recalculation of the resulting phantom dose by rapid forward convolution once the entrance fluence was calculated. A Delta4 QA phantom was used to validate the dose reconstructed in this approach. Results: The spatially-invariant representation of the EPID response accurately reproduced the entrance fluence with >99.5% fidelity and a simultaneous reduction of >60% in computational overhead. 3D dose for 10^6 voxels was reconstructed for the entire phantom geometry. A 3D global gamma analysis demonstrated a >95% pass rate at 3%/3mm. Conclusion: Our approach demonstrates the capabilities of an EPID-based end-to-end QA methodology that is more efficient than traditional EPID dosimetry methods. Displacing the point of measurement external to the QA phantom reduces the necessary complexity of the phantom itself while offering a method that is highly scalable and inherently generalizable to rotational and trajectory based deliveries. This research was partially supported by Varian.
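
    The Richardson-Lucy iteration at the core of the fluence reconstruction can be sketched as below. This toy version assumes a single spatially invariant kernel, whereas the abstract's method decomposes spatially variant kernels into a basis; the names and iteration count are illustrative.

    ```python
    # Richardson-Lucy deconvolution of an EPID image by a known kernel (toy).
    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(image, psf, n_iter=25):
        image = np.asarray(image, dtype=float)
        psf_mirror = psf[::-1, ::-1]
        estimate = np.full_like(image, image.mean())       # flat initial guess
        for _ in range(n_iter):
            blurred = fftconvolve(estimate, psf, mode="same")
            ratio = image / np.maximum(blurred, 1e-12)
            estimate *= fftconvolve(ratio, psf_mirror, mode="same")
        return estimate  # approximate entrance fluence
    ```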

  18. SU-F-BRD-08: A Novel Technique to Derive a Clinically-Acceptable Beam Model for Proton Pencil-Beam Scanning in a Commercial Treatment Planning System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scholey, J. E.; Lin, L.; Ainsley, C. G.

    2015-06-15

    Purpose: To evaluate the accuracy and limitations of a commercially-available treatment planning system’s (TPS’s) dose calculation algorithm for proton pencil-beam scanning (PBS) and present a novel technique to efficiently derive a clinically-acceptable beam model. Methods: In-air fluence profiles of PBS spots were modeled in the TPS alternately as single- (SG) and double-Gaussian (DG) functions, based on fits to commissioning data. Uniform-fluence, single-energy-layer square fields of various sizes and energies were calculated with both beam models and delivered to water. Dose was measured at several depths. Motivated by observed discrepancies in measured-versus-calculated dose comparisons, a third model was constructed based on double-Gaussian parameters contrived through a novel technique developed to minimize these differences (DGC). Eleven cuboid-dose-distribution-shaped fields with varying range/modulation and field size were subsequently generated in the TPS, using each of the three beam models described, and delivered to water. Dose was measured at the middle of each spread-out Bragg peak. Results: For energies <160 MeV, the DG model fit square-field measurements to <2% at all depths, while the SG model could disagree by >6%. For energies >160 MeV, both SG and DG models fit square-field measurements to <1% at <4 cm depth, but could exceed 6% deeper. By comparison, disagreement with the DGC model was always <3%. For the cuboid plans, calculation-versus-measured percent dose differences exceeded 7% for the SG model, being larger for smaller fields. The DG model showed <3% disagreement for all field sizes in shorter-range beams, although >5% differences for smaller fields persisted in longer-range beams. In contrast, the DGC model predicted measurements to <2% for all beams. Conclusion: Neither the TPS’s SG nor DG models, employed as intended, are ideally suited for routine clinical use. However, via a novel technique to be presented, its DG model can be tuned judiciously to yield acceptable results.
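
    The single- versus double-Gaussian spot models can be illustrated with a small fitting sketch (parameter names and values are hypothetical, not the TPS's); the point is that an SG fit misses the low-amplitude halo that matters at larger radii.

    ```python
    # Fit SG and DG radial fluence models to a spot profile (illustrative).
    import numpy as np
    from scipy.optimize import curve_fit

    def single_gauss(r, A, s):
        return A * np.exp(-r**2 / (2 * s**2))

    def double_gauss(r, A, s1, w, s2):
        # Primary Gaussian plus a low-weight wide halo component
        return A * ((1 - w) * np.exp(-r**2 / (2 * s1**2))
                    + w * np.exp(-r**2 / (2 * s2**2)))

    r = np.linspace(0.0, 30.0, 61)                    # mm, radial positions
    meas = double_gauss(r, 1.0, 4.0, 0.08, 12.0)      # stand-in for measurement
    p_sg, _ = curve_fit(single_gauss, r, meas, p0=[1, 5])
    p_dg, _ = curve_fit(double_gauss, r, meas, p0=[1, 5, 0.05, 15])
    print(np.max(np.abs(single_gauss(r, *p_sg) - meas)))  # SG halo residual
    ```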

  19. An efficient finite element method for simulation of droplet spreading on a topologically rough surface

    NASA Astrophysics Data System (ADS)

    Luo, Li; Wang, Xiao-Ping; Cai, Xiao-Chuan

    2017-11-01

    We study numerically the dynamics of a three-dimensional droplet spreading on a rough solid surface using a phase-field model consisting of the coupled Cahn-Hilliard and Navier-Stokes equations with a generalized Navier boundary condition (GNBC). An efficient finite element method on unstructured meshes is introduced to cope with the complex geometry of the solid surfaces. We extend the GNBC to surfaces with complex geometry by including its weak form along different normal and tangential directions in the finite element formulation. The semi-implicit time discretization scheme results in a decoupled system for the phase function, the velocity, and the pressure. In addition, a mass compensation algorithm is introduced to preserve the mass of the droplet. To efficiently solve the decoupled systems, we present a highly parallel solution strategy based on domain decomposition techniques. We validate the newly developed solution method through extensive numerical experiments, particularly for those phenomena that cannot be captured by two-dimensional simulations. On a surface with circular posts, we study how the wettability of the rough surface depends on the geometry of the posts. The contact line motion for a droplet spreading over periodic rough surfaces is also efficiently computed. Moreover, we study the spreading process of an impacting droplet on a microstructured surface; qualitative agreement is achieved between the numerical and experimental results. The parallel performance suggests that the proposed solution algorithm is scalable to over 4,000 processor cores with tens of millions of unknowns.

  20. Local ROI Reconstruction via Generalized FBP and BPF Algorithms along More Flexible Curves

    PubMed Central

    Ye, Yangbo; Zhao, Shiying; Wang, Ge

    2006-01-01

    We study the local region-of-interest (ROI) reconstruction problem, also referred to as the local CT problem. Our scheme includes two steps: (a) the local truncated normal-dose projections are extended to a global dataset by combining a few global low-dose projections; (b) the ROI is reconstructed by either the generalized filtered backprojection (FBP) or backprojection-filtration (BPF) algorithm. The simulation results show that both the FBP and BPF algorithms produce satisfactory reconstructions, with image quality in the ROI comparable to that of the corresponding global CT reconstruction. PMID:23165018

  1. MO-PIS-Exhibit Hall-01: Imaging: CT Dose Optimization Technologies I

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Denison, K; Smith, S

    Partners in Solutions is an exciting new program in which AAPM partners with our vendors to present practical “hands-on” information about the equipment and software systems that we use in our clinics. The imaging topic this year is CT scanner dose optimization capabilities. Note that the sessions are being held in a special purpose room built on the Exhibit Hall Floor, to encourage further interaction with the vendors. Dose Optimization Capabilities of GE Computed Tomography Scanners Presentation Time: 11:15 – 11:45 AM GE Healthcare is dedicated to the delivery of high quality clinical images through the development of technologies that optimize the application of ionizing radiation. In computed tomography, dose management solutions fall into four categories. One category of reconstruction technology employs projection data and statistical modeling to decrease noise in the reconstructed image - creating an opportunity for mA reduction in the acquisition of diagnostic images. Veo represents true Model Based Iterative Reconstruction (MBiR). Using high-level algorithms in tandem with advanced computing power, Veo enables lower pixel noise standard deviation and improved spatial resolution within a single image. Advanced Adaptive Image Filters allow for maintenance of spatial resolution while reducing image noise. Examples of adaptive image space filters include Neuro 3-D filters and Cardiac Noise Reduction Filters. AutomA adjusts mA along the z-axis and is the CT equivalent of auto exposure control in conventional x-ray systems. Dynamic Z-axis Tracking offers an additional opportunity for dose reduction in helical acquisitions, while SmartTrack Z-axis Tracking serves to ensure beam, collimator and detector alignment during tube rotation. SmartmA provides angular mA modulation. ECG Helical Modulation reduces mA during the systolic phase of the heart cycle. SmartBeam optimization uses bowtie beam-shaping hardware and software to filter off-axis x-rays - minimizing dose and reducing x-ray scatter. The DICOM Radiation Dose Structured Report (RDSR) generates a dose report at the conclusion of every examination. Dose Check preemptively notifies CT operators when scan parameters exceed user-defined dose thresholds. DoseWatch is an information technology application providing vendor-agnostic dose tracking and analysis for CT (and all other diagnostic x-ray modalities). SnapShot Pulse improves coronary CTA dose management. VolumeShuttle uses two acquisitions to increase coverage, decrease dose, and conserve on contrast administration. Color-Coding for Kids applies the Broselow-Luten Pediatric System to facilitate pediatric emergency care and reduce medical errors. FeatherLight achieves dose optimization through pediatric procedure-based protocols. Adventure Series scanners provide a child-friendly imaging environment promoting patient cooperation with resultant reduction in retakes and patient motion. Philips CT Dose Optimization Tools and Advanced Reconstruction Presentation Time: 11:45 – 12:15 PM The first part of the talk will cover “Dose Reduction and Dose Optimization Technologies” present in Philips CT scanners. The main technologies to be presented include: DoseRight and tube current modulation (DoseRight, Z-DOM, 3D-DOM, DoseRight Cardiac); special acquisition modes; beam filtration and beam shapers; the Eclipse and ClearRay collimators; and the NanoPanel detector. The DoseRight portion will cover automatic tube current selection that automatically adjusts the dose for the individual patient.
The presentation will explore the modulation techniques currently employed in Philips CT scanners and will include the algorithmic concepts as well as illustrative examples. Modulation and current selection technologies to be covered include the Automatic Current Selection component of DoseRight, Z-DOM longitudinal dose modulation, 3D-DOM (a combination of longitudinal and rotational dose modulation), DoseRight Cardiac (an ECG-based dose modulation scheme), and the DoseRight Index (DRI) IQ index. The special acquisition modes portion covers acquisition techniques such as prospective gating, which is designed to reduce exposure to the patient through the Cardiac Step and Shoot scan mode. This mode can substitute for the much-higher-dose retrospective scan modes for certain types of cardiac imaging. The beam filtration and beam shaper portion will discuss the variety of filtration and beam-shaping configurations available on Philips scanners. This topic includes the x-ray beam characteristics and tube filtration, as well as dose compensator characteristics. The Eclipse collimator, ClearRay collimator and NanoPanel detector portion will discuss additional technologies specific to wide-coverage CT that address some of the unique challenges encountered, and the techniques employed to optimize image quality and dose utilization. The Eclipse collimator reduces extraneous exposure by actively blocking the radiation tails at either end of helical scans that do not contribute to image generation. The ClearRay collimator and the NanoPanel detector optimize the quality of the signal that reaches the detectors by addressing the increased scattered radiation present in wide-coverage scanning, and the NanoPanel detector adds superior electronic noise characteristics that are valuable when imaging at low dose levels. The second part of the talk will present “Advanced Reconstruction Technologies” currently available on Philips CT scanners. The talk will cover filtered back projection (FBP), iDose4 and Iterative Model Reconstruction (IMR). Each reconstruction method will include a discussion of the algorithm as well as similarities and differences between the algorithms. Examples illustrating the merits of each algorithm will be presented, and techniques and metrics to characterize the performance of each type of algorithm will be described. The filtered back projection portion will provide a brief summary of relevant standard image reconstruction techniques in common use and discuss the common tradeoffs when using the FBP algorithm. The iDose4 portion will present the algorithms used for iDose4 as well as the different levels. The meaning of the different iDose4 levels available will be presented and quantified. Guidelines for selecting iDose4 parameters based on the imaging need will be explained. The different image quality goals available with iDose4, and specifically how iDose4 enables noise reduction, spatial resolution improvement or both, will be explained. The approaches to leveraging the benefits of iDose4, such as improved spatial resolution, decreased noise, and artifact prevention, will be described and quantified, and the measurements and metrics behind the improvements will be presented. The image quality benefits in specific imaging situations, as well as how best to combine the technology with other dose reduction strategies to ensure the best image quality at a given dose level, will be presented.
Insight into the IMR algorithm, as well as its contrast with the iDose4 techniques and performance characteristics, will be discussed. Metrics and techniques for characterizing this class of algorithm and its IQ performance will be presented. The image quality benefits and the dose reduction capabilities of IMR will be explored. Illustrative examples of the noise reduction, spatial resolution improvement, and low contrast detectability improvements of the reconstruction method will be presented: clinical cases and phantom measurements demonstrating the benefits of IMR in the areas of low dose imaging, spatial resolution and low contrast resolution are discussed, and the technical details behind the measurements will be presented in comparison with both iDose4 and traditional filtered back projection (FBP).

  2. SU-C-207-05: A Comparative Study of Noise-Reduction Algorithms for Low-Dose Cone-Beam Computed Tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mukherjee, S; Yao, W

    2015-06-15

    Purpose: To study different noise-reduction algorithms and to improve the image quality of low-dose cone beam CT for patient positioning in radiation therapy. Methods: In low-dose cone-beam CT, the reconstructed image is contaminated with excessive quantum noise. In this study, three well-developed noise-reduction algorithms, namely a) the penalized weighted least-squares (PWLS) method, b) the split-Bregman total variation (TV) method, and c) the compressed sensing (CS) method, were studied and applied to the images of a computer-simulated “Shepp-Logan” phantom and a physical CATPHAN phantom. Up to 20% additive Gaussian noise was added to the Shepp-Logan phantom. The CATPHAN phantom was scanned by a Varian OBI system with 100 kVp, 4 ms and 20 mA. For comparing the performance of these algorithms, the peak signal-to-noise ratio (PSNR) of the denoised images was computed. Results: The algorithms were shown to have the potential to reduce the noise level of low-dose CBCT images. For the Shepp-Logan phantom, an improvement in PSNR of 2 dB, 3.1 dB and 4 dB was observed using PWLS, TV and CS, respectively, while for the CATPHAN phantom, the improvement was 1.2 dB, 1.8 dB and 2.1 dB, respectively. Conclusion: The penalized weighted least-squares, total variation and compressed sensing methods were studied and compared for reducing the noise of a simulated phantom and a physical phantom scanned by low-dose CBCT. The techniques have shown promising results for noise reduction in terms of PSNR improvement. However, reducing the noise without compromising the smoothness and resolution of the image needs more extensive research.
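
    The PSNR metric used for the comparison can be computed as in this minimal sketch (assuming the noise-free phantom image serves as the reference):

    ```python
    # Peak signal-to-noise ratio between a reference and a test image.
    import numpy as np

    def psnr(reference, test):
        ref = np.asarray(reference, dtype=float)
        mse = np.mean((ref - np.asarray(test, dtype=float)) ** 2)
        return 10.0 * np.log10(ref.max() ** 2 / mse)

    # A "4 dB improvement" means psnr(ref, denoised) - psnr(ref, noisy) == 4
    ```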

  3. Impact of dose engine algorithm in pencil beam scanning proton therapy for breast cancer.

    PubMed

    Tommasino, Francesco; Fellin, Francesco; Lorentini, Stefano; Farace, Paolo

    2018-06-01

    Proton therapy for the treatment of breast cancer is acquiring increasing interest, due to the potential reduction of radiation-induced side effects such as cardiac and pulmonary toxicity. While several in silico studies demonstrated the gain in plan quality offered by pencil beam scanning (PBS) compared to passive scattering techniques, the related dosimetric uncertainties have been poorly investigated so far. Five breast cancer patients were planned with the Raystation 6 analytical pencil beam (APB) and Monte Carlo (MC) dose calculation algorithms. Plans were optimized with APB, and MC was then used to recalculate the dose distribution. Movable snout and beam splitting techniques (i.e. using two sub-fields for the same beam entrance, one with and the other without the use of a range shifter) were considered. PTV dose statistics were recorded. The same planning configurations were adopted for the experimental benchmark. Dose distributions were measured with a 2D array of ionization chambers and compared to the APB- and MC-calculated ones by means of a γ analysis (agreement criteria 3%, 3 mm). Our results indicate that, when using proton PBS for breast cancer treatment, the Raystation 6 APB algorithm does not provide sufficient accuracy, especially with large air gaps. By contrast, the MC algorithm resulted in much higher accuracy in all beam configurations tested and is therefore recommended. Centers where an MC algorithm is not yet available should consider a careful use of APB, possibly combined with a movable snout system or in any case with strategies aimed at minimizing air gaps. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
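
    For reference, the γ analysis used above can be sketched with a brute-force implementation (global dose-difference and distance-to-agreement criteria); clinical tools use optimized search strategies, so this is illustrative only.

    ```python
    # Brute-force 2D gamma analysis (global criteria), for illustration.
    import numpy as np

    def gamma_pass_rate(ref, ev, pixel_mm, dd=0.03, dta_mm=3.0):
        """ref, ev: 2D dose arrays on the same grid; dd is a global criterion."""
        ny, nx = ref.shape
        yy, xx = np.mgrid[0:ny, 0:nx]
        dd_abs = dd * ref.max()
        gammas = np.empty_like(ref, dtype=float)
        for iy in range(ny):
            for ix in range(nx):
                dist2 = ((yy - iy) ** 2 + (xx - ix) ** 2) * pixel_mm ** 2
                dose2 = (ev - ref[iy, ix]) ** 2
                gammas[iy, ix] = np.sqrt(np.min(dist2 / dta_mm**2
                                                + dose2 / dd_abs**2))
        return float(np.mean(gammas <= 1.0))
    ```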

  4. Validation of a Preclinical Spinal Safety Model: Effects of Intrathecal Morphine in the Neonatal Rat

    PubMed Central

    Westin, B. David; Walker, Suellen M.; Deumens, Ronald; Grafe, Marjorie; Yaksh, Tony L.

    2010-01-01

    Background Preclinical studies demonstrate increased neuroapoptosis after general anesthesia in early life. Neuraxial techniques may minimize potential risks, but there has been no systematic evaluation of spinal analgesic safety in developmental models. We aimed to validate a preclinical model for evaluating dose-dependent efficacy, spinal cord toxicity, and long-term function following intrathecal morphine in the neonatal rat. Methods Lumbar intrathecal injections were performed in anesthetized rats aged postnatal day (P)3, 10 and 21. The relationship between injectate volume and segmental spread was assessed post mortem and by in-vivo imaging. To determine the antinociceptive dose, mechanical withdrawal thresholds were measured at baseline and 30 minutes following intrathecal morphine. To evaluate toxicity, doses up to the maximum tolerated were administered, and spinal cord histopathology, apoptosis and glial response were evaluated 1 and 7 days following P3 or P21 injection. Sensory thresholds and gait analysis were evaluated at P35. Results Intrathecal injection can be reliably performed at all postnatal ages and injectate volume influences segmental spread. Intrathecal morphine produced spinally-mediated analgesia at all ages with lower dose requirements in younger pups. High dose intrathecal morphine did not produce signs of spinal cord toxicity or alter long-term function. Conclusions The therapeutic ratio for intrathecal morphine (toxic dose / antinociceptive dose) was at least 300 at P3, and at least 20 at P21 (latter doses limited by side effects). These data provide relative efficacy and safety information for comparison with other analgesic preparations and contribute supporting evidence for the validity of this preclinical neonatal safety model. PMID:20526189

  5. Validation of a preclinical spinal safety model: effects of intrathecal morphine in the neonatal rat.

    PubMed

    Westin, B David; Walker, Suellen M; Deumens, Ronald; Grafe, Marjorie; Yaksh, Tony L

    2010-07-01

    Preclinical studies demonstrate increased neuroapoptosis after general anesthesia in early life. Neuraxial techniques may minimize potential risks, but there has been no systematic evaluation of spinal analgesic safety in developmental models. We aimed to validate a preclinical model for evaluating dose-dependent efficacy, spinal cord toxicity, and long-term function after intrathecal morphine in the neonatal rat. Lumbar intrathecal injections were performed in anesthetized rats aged postnatal day (P) 3, 10, and 21. The relationship between injectate volume and segmental spread was assessed postmortem and by in vivo imaging. To determine the antinociceptive dose, mechanical withdrawal thresholds were measured at baseline and 30 min after intrathecal morphine. To evaluate toxicity, doses up to the maximum tolerated were administered, and spinal cord histopathology, apoptosis, and glial response were evaluated 1 and 7 days after P3 or P21 injection. Sensory thresholds and gait analysis were evaluated at P35. Intrathecal injection can be reliably performed at all postnatal ages and injectate volume influences segmental spread. Intrathecal morphine produced spinally mediated analgesia at all ages with lower dose requirements in younger pups. High-dose intrathecal morphine did not produce signs of spinal cord toxicity or alter long-term function. The therapeutic ratio for intrathecal morphine (toxic dose/antinociceptive dose) was at least 300 at P3 and at least 20 at P21 (latter doses limited by side effects). These data provide relative efficacy and safety for comparison with other analgesic preparations and contribute supporting evidence for the validity of this preclinical neonatal safety model.

  6. Fragmenting networks by targeting collective influencers at a mesoscopic level.

    PubMed

    Kobayashi, Teruyoshi; Masuda, Naoki

    2016-11-25

    A practical approach to protecting networks against epidemic processes such as spreading of infectious diseases, malware, and harmful viral information is to remove some influential nodes beforehand to fragment the network into small components. Because determining the optimal order to remove nodes is a computationally hard problem, various approximate algorithms have been proposed to efficiently fragment networks by sequential node removal. Morone and Makse proposed an algorithm employing the non-backtracking matrix of given networks, which outperforms various existing algorithms. In fact, many empirical networks have community structure, compromising the assumption of local tree-like structure on which the original algorithm is based. We develop an immunization algorithm by synergistically combining the Morone-Makse algorithm and coarse graining of the network in which we regard a community as a supernode. In this way, we aim to identify nodes that connect different communities at a reasonable computational cost. The proposed algorithm works more efficiently than the Morone-Makse and other algorithms on networks with community structure.
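
    The coarse-graining idea (communities as supernodes) can be illustrated with a toy networkx sketch; for simplicity this ranks nodes by how many foreign communities they touch rather than by the Morone-Makse collective-influence score, so it conveys only the mesoscopic viewpoint, not the paper's full algorithm.

    ```python
    # Toy mesoscopic fragmentation: prefer nodes bridging many communities.
    import networkx as nx
    from networkx.algorithms import community

    def bridging_nodes(G, n_remove=10):
        comms = community.greedy_modularity_communities(G)
        label = {v: i for i, c in enumerate(comms) for v in c}
        # Score: number of distinct foreign communities adjacent to each node
        score = {v: len({label[u] for u in G[v] if label[u] != label[v]})
                 for v in G}
        return sorted(score, key=score.get, reverse=True)[:n_remove]

    G = nx.connected_caveman_graph(20, 10)   # strong community structure
    print(bridging_nodes(G, 5))
    ```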

  7. Fragmenting networks by targeting collective influencers at a mesoscopic level

    NASA Astrophysics Data System (ADS)

    Kobayashi, Teruyoshi; Masuda, Naoki

    2016-11-01

    A practical approach to protecting networks against epidemic processes such as spreading of infectious diseases, malware, and harmful viral information is to remove some influential nodes beforehand to fragment the network into small components. Because determining the optimal order to remove nodes is a computationally hard problem, various approximate algorithms have been proposed to efficiently fragment networks by sequential node removal. Morone and Makse proposed an algorithm employing the non-backtracking matrix of given networks, which outperforms various existing algorithms. In fact, many empirical networks have community structure, compromising the assumption of local tree-like structure on which the original algorithm is based. We develop an immunization algorithm by synergistically combining the Morone-Makse algorithm and coarse graining of the network in which we regard a community as a supernode. In this way, we aim to identify nodes that connect different communities at a reasonable computational cost. The proposed algorithm works more efficiently than the Morone-Makse and other algorithms on networks with community structure.

  8. Fragmenting networks by targeting collective influencers at a mesoscopic level

    PubMed Central

    Kobayashi, Teruyoshi; Masuda, Naoki

    2016-01-01

    A practical approach to protecting networks against epidemic processes such as spreading of infectious diseases, malware, and harmful viral information is to remove some influential nodes beforehand to fragment the network into small components. Because determining the optimal order to remove nodes is a computationally hard problem, various approximate algorithms have been proposed to efficiently fragment networks by sequential node removal. Morone and Makse proposed an algorithm employing the non-backtracking matrix of given networks, which outperforms various existing algorithms. In fact, many empirical networks have community structure, compromising the assumption of local tree-like structure on which the original algorithm is based. We develop an immunization algorithm by synergistically combining the Morone-Makse algorithm and coarse graining of the network in which we regard a community as a supernode. In this way, we aim to identify nodes that connect different communities at a reasonable computational cost. The proposed algorithm works more efficiently than the Morone-Makse and other algorithms on networks with community structure. PMID:27886251

  9. MO-C-17A-11: A Segmentation and Point Matching Enhanced Deformable Image Registration Method for Dose Accumulation Between HDR CT Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhen, X; Chen, H; Zhou, L

    2014-06-15

    Purpose: To propose and validate a novel and accurate deformable image registration (DIR) scheme to facilitate dose accumulation among treatment fractions of high-dose-rate (HDR) gynecological brachytherapy. Methods: We have developed a method to adapt DIR algorithms to gynecologic anatomies with HDR applicators by incorporating a segmentation step and a point-matching step into an existing DIR framework. In the segmentation step, the random walks algorithm is used to accurately segment and remove the applicator region (AR) in the HDR CT image. A semi-automatic seed point generation approach is developed to obtain the incremented foreground and background point sets to feed the random walks algorithm. In the subsequent point-matching step, a feature-based thin-plate spline-robust point matching (TPS-RPM) algorithm is employed for AR surface point matching. With the resulting mapping, a DVF characteristic of the deformation between the two AR surfaces is generated by B-spline approximation, which serves as the initial DVF for the following Demons DIR between the two AR-free HDR CT images. Finally, the DVF calculated via Demons, combined with the initial one, serves as the final DVF to map doses between HDR fractions. Results: The segmentation and registration accuracy are quantitatively assessed on nine clinical HDR cases from three gynecological cancer patients. The quantitative results, as well as visual inspection of the DIR, indicate that our proposed method can suppress the interference of the applicator with the DIR algorithm, and accurately register HDR CT images as well as deform and add interfractional HDR doses. Conclusions: We have developed a novel and robust DIR scheme that can perform registration between HDR gynecological CT images and yield accurate registration results. This new DIR scheme has potential for accurate interfractional HDR dose accumulation. This work is supported in part by the National Natural Science Foundation of China (nos. 30970866 and 81301940).

  10. TH-E-BRE-05: Analysis of Dosimetric Characteristics in Two Leaf Motion Calculator Algorithms for Sliding Window IMRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, L; Huang, B; Rowedder, B

    Purpose: The Smart leaf motion calculator (SLMC) in the Eclipse treatment planning system is an advanced fluence delivery modeling algorithm, as it takes into account fine MLC features including inter-leaf leakage, rounded leaf tips, non-uniform leaf thickness, and the spindle cavity. In this study, the SLMC and traditional Varian LMC (VLMC) algorithms were investigated, for the first time, in terms of the dosimetric characteristics and delivery accuracy of sliding window (SW) IMRT. Methods: The SW IMRT plans of 51 cancer cases were included to evaluate the dosimetric characteristics and dose delivery accuracy of leaf motion calculated by SLMC and VLMC, respectively. All plans were delivered using a Varian TrueBeam Linac. The DVHs and MUs of the plans were analyzed. Three patient-specific QA tools - the independent dose calculation software IMSure, the Delta4 phantom, and EPID portal dosimetry - were also used to measure the delivered dose distribution. Results: Significant differences in the MUs were observed between the two LMCs (p≤0.001). Gamma analysis shows excellent agreement between the planned dose distribution calculated by both LMC algorithms and the delivered dose distribution measured by the three QA tools in all plans at 3%/3 mm, leading to a mean pass rate exceeding 97%. The mean fraction of pixels with gamma < 1 for SLMC is slightly lower than that for VLMC in the IMSure and Delta4 results, but higher in portal dosimetry (the highest spatial resolution), especially in complex cases such as nasopharynx. Conclusion: The study suggests that the two LMCs generate similar target coverage and sparing patterns of critical structures. However, SLMC is modestly more accurate than VLMC in modeling advanced MLC features, which may lead to more accurate dose delivery in SW IMRT. Current clinical QA tools might not be specific enough to differentiate the dosimetric discrepancies at the millimeter level calculated by these two LMC algorithms. NIH/NIGMS grant U54 GM104944, Lincy Endowed Assistant Professorship.

  11. Identifying multiple influential spreaders based on generalized closeness centrality

    NASA Astrophysics Data System (ADS)

    Liu, Huan-Li; Ma, Chuang; Xiang, Bing-Bing; Tang, Ming; Zhang, Hai-Feng

    2018-02-01

    To maximize the spreading influence of multiple spreaders in complex networks, one important fact cannot be ignored: the multiple spreaders should be dispersively distributed in networks, which can effectively reduce the redundancy of information spreading. For this purpose, we define a generalized closeness centrality (GCC) index by generalizing the closeness centrality index to a set of nodes. The problem then becomes one of identifying multiple spreaders such that an objective function attains its minimal value. By comparison with the K-means clustering algorithm, we find that this optimization problem is very similar to minimizing the objective function in the K-means method. Therefore, finding the set of nodes with the highest GCC value can be approximately solved by the K-means method. Two typical transmission dynamics - the epidemic spreading process and the rumor spreading process - are implemented on real networks to verify the good performance of our proposed method.
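
    The K-means analogy can be made concrete with the following hedged sketch: alternately assign nodes to their nearest seed spreader (shortest-path distance) and re-pick each cluster's most central node. This is a k-medoids-style approximation that assumes a connected graph, not the paper's exact procedure.

    ```python
    # Approximate selection of k dispersed spreaders on a connected graph.
    import networkx as nx

    def k_spreaders(G, k=3, n_iter=20):
        dist = dict(nx.all_pairs_shortest_path_length(G))
        seeds = list(G)[:k]                      # arbitrary initial seeds
        for _ in range(n_iter):
            clusters = {s: [] for s in seeds}
            for v in G:                          # assign to nearest seed
                clusters[min(seeds, key=lambda s: dist[s][v])].append(v)
            new_seeds = [min(c, key=lambda u: sum(dist[u][w] for w in c))
                         for c in clusters.values() if c]
            if set(new_seeds) == set(seeds):
                break
            seeds = new_seeds
        return seeds

    print(k_spreaders(nx.karate_club_graph(), k=3))
    ```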

  12. SU-F-T-268: A Feasibility Study of Independent Dose Verification for Vero4DRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamashita, M; Kokubo, M; Institute of Biomedical Research and Innovation, Kobe, Hyogo

    2016-06-15

    Purpose: Vero4DRT (Mitsubishi Heavy Industries Ltd.) has been released for a few years. The treatment planning system (TPS) of Vero4DRT is dedicated, so measurement has been the only method of dose verification. There have been no reports of independent dose verification using a Clarkson-based algorithm for Vero4DRT. An independent dose verification software program for general-purpose linacs, using a modified Clarkson-based algorithm, was adapted for Vero4DRT. In this study, we evaluated the accuracy of the independent dose verification program and the feasibility of a secondary check for Vero4DRT. Methods: iPlan (Brainlab AG) was used as the TPS. Pencil Beam Convolution was used as the dose calculation algorithm for IMRT and X-ray Voxel Monte Carlo was used for the others. Simple MU Analysis (SMU, Triangle Products, Japan) was used as the independent dose verification software program, in which CT-based dose calculation was performed using a modified Clarkson-based algorithm. In this study, 120 patients’ treatment plans were collected in our institute. The treatments were performed using conventional irradiation for lung and prostate, SBRT for lung, and step-and-shoot IMRT for prostate. Doses computed by the TPS and the SMU were compared, and confidence limits (CLs, mean ± 2SD %) were compared to those from a general-purpose linac. Results: The CLs for conventional irradiation (lung, prostate), SBRT (lung) and IMRT (prostate) were 2.2 ± 3.5% (CL of the general-purpose linac: 2.4 ± 5.3%), 1.1 ± 1.7% (−0.3 ± 2.0%), 4.8 ± 3.7% (5.4 ± 5.3%) and −0.5 ± 2.5% (−0.1 ± 3.6%), respectively. The CLs for Vero4DRT are similar to those for the general-purpose linac. Conclusion: The independent dose verification for the new linac is clinically available as a secondary check, and we performed the check with a tolerance level similar to that of the general-purpose linac. This research is partially supported by the Japan Agency for Medical Research and Development (AMED).

  13. SU-F-P-39: End-To-End Validation of a 6 MV High Dose Rate Photon Beam, Configured for Eclipse AAA Algorithm Using Golden Beam Data, for SBRT Treatments Using RapidArc

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferreyra, M; Salinas Aranda, F; Dodat, D

    Purpose: To use end-to-end testing to validate a 6 MV high dose rate photon beam, configured for the Eclipse AAA algorithm using Golden Beam Data (GBD), for SBRT treatments using RapidArc. Methods: Beam data were configured for the Varian Eclipse AAA algorithm using the GBD provided by the vendor. Transverse and diagonal dose profiles, PDDs, and output factors down to a field size of 2×2 cm² were measured on a Varian Trilogy Linac and compared with the GBD library using 2%/2 mm 1D gamma analysis. The MLC transmission factor and dosimetric leaf gap were determined to characterize the MLC in Eclipse. Mechanical and dosimetric tests were performed combining different gantry rotation speeds, dose rates and leaf speeds to evaluate the delivery system performance against VMAT accuracy requirements. An end-to-end test was implemented by planning several SBRT RapidArc treatments on a CIRS 002LFC IMRT Thorax Phantom. The CT scanner calibration curve was acquired and loaded in Eclipse. A PTW 31013 ionization chamber was used with a Keithley 35617EBS electrometer for absolute point dose measurements in water and lung-equivalent inserts. TPS-calculated planar dose distributions were compared to those measured using EPID and MapCheck, as an independent verification method. Results were evaluated with gamma criteria of 2% dose difference and 2 mm DTA for 95% of points. Results: The GBD set vs. measured data passed 2%/2 mm 1D gamma analysis even for small fields. Machine performance tests show that results are independent of the machine delivery configuration, as expected. The absolute point dosimetry comparison agreed within 4% for the worst-case scenario in lung. Over 97% of the points evaluated in the dose distributions passed the gamma index analysis. Conclusion: Eclipse AAA algorithm configuration of the 6 MV high dose rate photon beam using GBD proved efficient. End-to-end test dose calculation results indicate it can be used clinically for SBRT using RapidArc.

  14. Clinical implementation of AXB from AAA for breast: Plan quality and subvolume analysis.

    PubMed

    Guebert, Alexandra; Conroy, Leigh; Weppler, Sarah; Alghamdi, Majed; Conway, Jessica; Harper, Lindsay; Phan, Tien; Olivotto, Ivo A; Smith, Wendy L; Quirk, Sarah

    2018-05-01

    Two dose calculation algorithms are available in Varian Eclipse software: the Anisotropic Analytical Algorithm (AAA) and Acuros External Beam (AXB). Many Varian Eclipse-based centers have access to AXB; however, a thorough understanding of how it will affect plan characteristics and, subsequently, clinical practice is necessary prior to implementation. We characterized the difference in breast plan quality between AXB and AAA for dissemination to clinicians during implementation. Locoregional irradiation plans were created with AAA for 30 breast cancer patients with a prescription dose of 50 Gy to the breast and 45 Gy to the regional nodes, in 25 fractions. The internal mammary chain (IMC CTV) nodes were covered by 80% of the breast dose. AXB, with both dose-to-water and dose-to-medium reporting, was used to recalculate the plans while maintaining constant monitor units. Target coverage and organ-at-risk doses were compared between the two algorithms using dose-volume parameters. An analysis to assess location-specific changes was performed by dividing the breast into nine subvolumes in the superior-inferior and left-right directions. Minimal differences were found between the AXB- and AAA-calculated plans. The median difference between AXB and AAA for breast CTV V95% was <2.5%. For the IMC CTV, the median differences in V95% and V80% were <5% and 0%, respectively, indicating that IMC CTV coverage only decreased when it was marginally covered. The mean superficial dose increased by a median of 3.2 Gy. In the subvolume analysis, the medial subvolumes were "hotter" when recalculated with AXB and the lateral subvolumes "cooler"; however, all differences were within 2 Gy. We observed minimal differences in the magnitude and spatial distribution of dose when comparing the two algorithms. The largest observable differences occurred in superficial dose regions. Therefore, clinical implementation of AXB from AAA for breast radiotherapy is not expected to result in changes in clinical practice for prescribing or planning breast radiotherapy. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

  15. Developing a Treatment Planning Software Based on TG-43U1 Formalism for Cs-137 LDR Brachytherapy.

    PubMed

    Sina, Sedigheh; Faghihi, Reza; Soleimani Meigooni, Ali; Siavashpour, Zahra; Mosleh-Shirazi, Mohammad Amin

    2013-08-01

    The old Treatment Planning Systems (TPSs) used for intracavitary brachytherapy with the Cs-137 Selectron source utilize traditional dose calculation methods, considering each source as a point source. Using such methods introduces significant errors in dose estimation. Since 1995, TG-43 has been used as the main dose calculation formalism in TPSs. The purpose of this study is to design and establish treatment planning software for the Cs-137 Selectron brachytherapy source, based on the TG-43U1 formalism and incorporating the effects of the applicator and dummy spacers. The two software packages used for treatment planning of Cs-137 sources in Iran (STPS and PLATO) are based on the old formalisms. In this planning system, the dosimetry parameters of each pellet at different positions inside the applicators were obtained with the MCNP4c code. The dose distribution around every combination of active and inactive pellets was then obtained by summing the doses. The accuracy of this algorithm was checked by comparing its results for specific combinations of active and inactive pellets with MC simulations. Finally, the uncertainty of the old dose calculation formalism was investigated by comparing the results of the STPS and PLATO software with those obtained by the new algorithm. For a typical arrangement of 10 active pellets in the applicator, the percentage difference between the dose obtained by the new algorithm at a 1 cm distance from the tip of the applicator and that obtained by the old formalisms is about 30%, while the difference between the results of MCNP and the new algorithm is less than 5%. According to the results, the old dosimetry formalisms overestimate the dose, especially towards the applicator's tip, while the TG-43U1-based software performs the calculations more accurately.
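
    For context, the dose rate in the TG-43U1 2D formalism referenced above takes the standard form below (reproduced in standard notation for convenience, not from the paper):

    ```latex
    % TG-43U1 two-dimensional dose-rate equation; r0 = 1 cm, theta0 = 90 deg.
    % S_K: air-kerma strength, \Lambda: dose-rate constant, G_L: line-source
    % geometry function, g_L: radial dose function, F: 2D anisotropy function.
    \dot{D}(r,\theta) = S_K \, \Lambda \,
        \frac{G_L(r,\theta)}{G_L(r_0,\theta_0)} \, g_L(r) \, F(r,\theta)
    ```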

  16. SU-E-J-92: Validating Dose Uncertainty Estimates Produced by AUTODIRECT, An Automated Program to Evaluate Deformable Image Registration Accuracy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, H; Chen, J; Pouliot, J

    2015-06-15

    Purpose: Deformable image registration (DIR) is a powerful tool with the potential to deformably map dose from one computed-tomography (CT) image to another. Errors in the DIR, however, will produce errors in the transferred dose distribution. We have proposed a software tool, called AUTODIRECT (automated DIR evaluation of confidence tool), which predicts voxel-specific dose mapping errors on a patient-by-patient basis. This work validates the effectiveness of AUTODIRECT to predict dose mapping errors with virtual and physical phantom datasets. Methods: AUTODIRECT requires 4 inputs: moving and fixed CT images and two noise scans of a water phantom (for noise characterization). Then, AUTODIRECT uses algorithms to generate test deformations and applies them to the moving and fixed images (along with processing) to digitally create sets of test images, with known ground-truth deformations that are similar to the actual one. The clinical DIR algorithm is then applied to these test image sets (currently 4). From these tests, AUTODIRECT generates spatial and dose uncertainty estimates for each image voxel based on a Student’s t distribution. This work compares these uncertainty estimates to the actual errors made by the Velocity Deformable Multi Pass algorithm on 11 virtual and 1 physical phantom datasets. Results: For 11 of the 12 tests, the predicted dose error distributions from AUTODIRECT are well matched to the actual error distributions, within 1–6% for 10 virtual phantoms and 9% for the physical phantom. For one of the cases, though, the predictions underestimated the errors in the tail of the distribution. Conclusion: Overall, the AUTODIRECT algorithm performed well on the 12 phantom cases for Velocity and was shown to generate accurate estimates of dose warping uncertainty. AUTODIRECT is able to automatically generate patient-, organ-, and voxel-specific DIR uncertainty estimates. This ability would be useful for patient-specific DIR quality assurance.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, C; Zhang, H; Chen, Y

    Purpose: Recently, compressed sensing (CS) based iterative reconstruction (IR) methods have been receiving attention for reconstructing high-quality cone beam computed tomography (CBCT) images from sparsely sampled or noisy projections. The aim of this study is to develop a novel baseline algorithm called Mask Guided Image Reconstruction (MGIR), which can provide superior image quality for both low-dose 3DCBCT and 4DCBCT under a single mathematical framework. Methods: In MGIR, the unknown CBCT volume is mathematically modeled as a combination of two regions, with anatomical structures 1) within an a priori defined mask and 2) outside the mask. Each part of the image is then updated alternately by solving minimization problems based on CS-type IR. For low-dose 3DCBCT, the former region is defined as the anatomically complex region, where the focus is on preserving edge information, while the latter region is assumed to have uniform contrast and is hence aggressively updated to remove noise/artifacts. In 4DCBCT, the regions are separated into the common static part and the moving part. The static and moving volumes are then updated with the global and phase-sorted projections, respectively, to optimize the image quality of both parts simultaneously. Results: Examination of the MGIR algorithm showed that high-quality low-dose 3DCBCT and 4DCBCT images can be reconstructed without compromising image resolution and imaging dose or scanning time, respectively. For low-dose 3DCBCT, a clinically viable and high-resolution head-and-neck image can be obtained while cutting the dose by 83%. In 4DCBCT, excellent-quality images could be reconstructed while requiring no more projection data and imaging dose than a typical clinical 3DCBCT scan. Conclusion: The results show that the image quality of MGIR was superior to other published CS-based IR algorithms for both 4DCBCT and low-dose 3DCBCT. This makes our MGIR algorithm potentially useful in various on-line clinical applications. Provisional Patent: UF#15476; WGS Ref. No. U1198.70067US00.

  18. Dosimetric Impact of Using the Acuros XB Algorithm for Intensity Modulated Radiation Therapy and RapidArc Planning in Nasopharyngeal Carcinomas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kan, Monica W.K., E-mail: kanwkm@ha.org.hk; Department of Physics and Materials Science, City University of Hong Kong, Hong Kong; Leung, Lucullus H.T.

    2013-01-01

    Purpose: To assess the dosimetric implications of using the Acuros XB (AXB) algorithm versus the anisotropic analytical algorithm (AAA) for intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy with RapidArc (RA) of nasopharyngeal carcinomas (NPC). Methods and Materials: Nine-field sliding window IMRT and triple-arc RA plans produced for 12 patients with NPC using AAA were recalculated using AXB. The dose distributions to multiple planning target volumes (PTVs) with different prescribed doses and to critical organs were compared. The PTVs were separated into components in bone, air, and tissue. The change in dose computed by AXB due to air and bone, and the variation of the magnitude of these dose changes with the number of fields, were also studied using simple geometric phantoms. Results: Using AXB instead of AAA, the averaged mean dose to PTV70 (70 Gy was prescribed to PTV70) was found to be 0.9% and 1.2% lower for IMRT and RA, respectively. It was approximately 1% lower in tissue, 2% lower in bone, and 1% higher in air. The averaged minimum dose to PTV70 in bone was approximately 4% lower for both IMRT and RA, whereas it was approximately 1.5% lower for PTV70 in tissue. The decrease in target doses estimated by AXB was mostly attributable to the presence of bone, less to tissue, and not at all to air. A similar trend was observed for PTV60 (60 Gy was prescribed to PTV60). The doses to most serial organs were found to be 1% to 3% lower and to other organs 4% to 10% lower for both techniques. Conclusions: The use of the AXB algorithm is highly recommended for IMRT and RapidArc planning for NPC cases.

  19. Adaptive control of nonlinear system using online error minimum neural networks.

    PubMed

    Jia, Chao; Li, Xiaoli; Wang, Kang; Ding, Dawei

    2016-11-01

    In this paper, a new learning algorithm named OEM-ELM (Online Error Minimized-ELM) is proposed, based on the ELM (Extreme Learning Machine) neural network algorithm and the incremental growth of its main structure. The core ideas of the OEM-ELM algorithm are online learning, evaluation of network performance, and growth of the number of hidden nodes. It combines the advantages of OS-ELM and EM-ELM, improving identification capability and avoiding network redundancy. An adaptive controller based on the proposed OEM-ELM algorithm is set up, which has stronger adaptive capability to changes in the environment. Adaptive control of a continuous stirred tank reactor (CSTR), a typical chemical process, is given as an application. The simulation results show that, compared with the traditional ELM algorithm, the proposed algorithm can avoid network redundancy and greatly improve control performance. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
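
    For readers unfamiliar with the building blocks named above, a minimal batch ELM step, which OS-ELM makes recursive and EM-ELM grows node by node, looks roughly like the sketch below. The tanh activation, the function names, and the hidden-layer width are assumptions; the OEM-ELM combination itself (online updating plus error-driven node growth) is not reproduced here.

        import numpy as np

        def elm_train(X, T, n_hidden, seed=0):
            # Basic ELM: random hidden layer, least-squares output weights.
            rng = np.random.default_rng(seed)
            W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
            b = rng.standard_normal(n_hidden)                # random biases
            H = np.tanh(X @ W + b)                           # hidden-layer output matrix
            beta = np.linalg.pinv(H) @ T                     # Moore-Penrose solution
            return W, b, beta

        def elm_predict(X, W, b, beta):
            return np.tanh(X @ W + b) @ beta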

  20. RESOLVE: A new algorithm for aperture synthesis imaging of extended emission in radio astronomy

    NASA Astrophysics Data System (ADS)

    Junklewitz, H.; Bell, M. R.; Selig, M.; Enßlin, T. A.

    2016-02-01

    We present resolve, a new algorithm for radio aperture synthesis imaging of extended and diffuse emission in total intensity. The algorithm is derived using Bayesian statistical inference techniques, estimating the surface brightness in the sky assuming a priori log-normal statistics. resolve estimates the measured sky brightness in total intensity, and the spatial correlation structure in the sky, which is used to guide the algorithm to an optimal reconstruction of extended and diffuse sources. During this process, the algorithm succeeds in deconvolving the effects of the radio interferometric point spread function. Additionally, resolve provides a map with an uncertainty estimate of the reconstructed surface brightness. Furthermore, with resolve we introduce a new, optimal visibility weighting scheme that can be viewed as an extension to robust weighting. In tests using simulated observations, the algorithm shows improved performance against two standard imaging approaches for extended sources, Multiscale-CLEAN and the Maximum Entropy Method.

  1. Optimization of the double dosimetry algorithm for interventional cardiologists

    NASA Astrophysics Data System (ADS)

    Chumak, Vadim; Morgun, Artem; Bakhanova, Elena; Voloskiy, Vitalii; Borodynchik, Elena

    2014-11-01

    A double dosimetry method is recommended in interventional cardiology (IC) to assess occupational exposure, yet currently there is no common and universal algorithm for effective dose estimation. In this work, a flexible and adaptive algorithm-building methodology was developed, and a specific algorithm applicable to the typical irradiation conditions of IC procedures was obtained. It was shown that the obtained algorithm agrees well with experimental measurements and is less conservative than other known algorithms.
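
    The paper's fitted coefficients are not given in the abstract, but double dosimetry algorithms generally take the linear form E ≈ w_u·H_u + w_o·H_o, where H_u and H_o are the readings of the dosimeters worn under and over the protective apron. The sketch below uses placeholder weights that are not the values derived in this work.

        def effective_dose(h_under, h_over, w_under=1.0, w_over=0.05):
            # Generic two-dosimeter estimate E = w_under*H_under + w_over*H_over.
            # The weights are illustrative placeholders, not this paper's values.
            return w_under * h_under + w_over * h_over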

  2. SU-E-I-01: Iterative CBCT Reconstruction with a Feature-Preserving Penalty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lyu, Q; Li, B; Southern Medical University, Guangzhou

    2015-06-15

    Purpose: Low-dose CBCT is desired in various clinical applications. Iterative image reconstruction algorithms have shown advantages in suppressing noise in low-dose CBCT. However, due to the smoothness constraint enforced during the reconstruction process, edges may be blurred and image features may be lost in the reconstructed image. In this work, we proposed a new penalty design to preserve image features in images reconstructed by iterative algorithms. Methods: Low-dose CBCT is reconstructed by minimizing the penalized weighted least-squares (PWLS) objective function. Binary Robust Independent Elementary Features (BRIEF) of the image were integrated into the penalty of PWLS. BRIEF is a general-purpose point descriptor that can be used to identify important features of an image. In this work, the BRIEF distance of two neighboring pixels was used to weigh the smoothing parameter in PWLS. For pixels with a large BRIEF distance, a weaker smoothness constraint is enforced, so that image features are better preserved. The performance of the PWLS algorithm with the BRIEF penalty was evaluated with a CatPhan 600 phantom. Results: The image quality reconstructed by the proposed PWLS-BRIEF algorithm is superior to that of the conventional PWLS method and the standard FDK method. At matched noise level, edges in the PWLS-BRIEF reconstructed image are better preserved. Conclusion: This study demonstrated that the proposed PWLS-BRIEF algorithm has great potential for preserving image features in low-dose CBCT.
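
    How a BRIEF distance can modulate a smoothing penalty might look like the following sketch. The offset-pair sampling pattern, the exponential mapping from Hamming distance to weight, and all parameter values are assumptions; the abstract only states that larger BRIEF distances receive weaker smoothing.

        import numpy as np

        def brief(img, y, x, pairs):
            # Binary descriptor at (y, x): intensity comparisons over sampled
            # offset pairs. `pairs` has shape (n_bits, 2, 2) of (dy, dx) offsets;
            # callers must keep sampled coordinates inside the image.
            a = img[y + pairs[:, 0, 0], x + pairs[:, 0, 1]]
            b = img[y + pairs[:, 1, 0], x + pairs[:, 1, 1]]
            return a > b

        def smoothing_weight(img, y, x, dy, dx, pairs, sigma=8.0):
            # Hypothetical penalty weight between (y, x) and a neighbor: a large
            # BRIEF (Hamming) distance suggests a feature boundary, so the
            # smoothness constraint is weakened (weight -> 0).
            d = np.count_nonzero(brief(img, y, x, pairs) != brief(img, y + dy, x + dx, pairs))
            return np.exp(-d / sigma)

        # e.g. pairs = np.random.default_rng(0).integers(-4, 5, size=(128, 2, 2))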

  3. A Simplified Algorithm for Statistical Investigation of Damage Spreading

    NASA Astrophysics Data System (ADS)

    Gecow, Andrzej

    2009-04-01

    On the way to simulating the adaptive evolution of a complex system describing a living object or a human-developed project, a fitness should be defined on node states or network external outputs. Feedbacks lead to circular attractors of these states or outputs, which make it difficult to define a fitness. The main statistical effects of the adaptive condition are the result of a tendency toward small change; for these effects to appear, only a statistically correct size of the damage initiated by an evolutionary change of the system is needed. This observation allows feedback loops to be cut and, in effect, a particular statistically correct state to be obtained instead of the long circular attractor which, in the quenched model, is expected for a chaotic network with feedback. Defining fitness on such states is simple. We calculate only damaged nodes, and only once. Such an algorithm is optimal for the investigation of damage spreading, i.e., of the statistical connections between the structural parameters of an initial change and the size of the resulting damage. It is a reversed-annealed method: functions and states (signals) may be randomly substituted, but connections are important and are preserved. The small damages important for adaptive evolution are correctly depicted, in contrast to the Derrida annealed approximation, which expects equilibrium levels for large networks. The algorithm indicates these levels correctly. The relevant program in Pascal, which executes the algorithm for a wide range of parameters, can be obtained from the author.
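
    A minimal reading of "calculate only damaged nodes and only once" is a single-visit propagation over the network, which cuts feedback loops by construction. The sketch below assumes dictionary-based node rules and is only an illustration of that idea, not the author's Pascal program.

        from collections import deque

        def damage_size(targets, rule, state, start):
            # Propagate a single perturbation through a network, evaluating each
            # node at most once (feedback loops are effectively cut), and return
            # the set of damaged nodes. `targets[n]` lists nodes fed by n;
            # `rule[n]` recomputes node n's value from the current `state` dict.
            damaged, queue = {start}, deque([start])
            while queue:
                n = queue.popleft()
                for m in targets.get(n, ()):
                    if m in damaged:
                        continue          # each node is damaged-and-counted once
                    old = state[m]
                    state[m] = rule[m](state)
                    if state[m] != old:
                        damaged.add(m)
                        queue.append(m)
            return damaged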

  4. Extending Three-Dimensional Weighted Cone Beam Filtered Backprojection (CB-FBP) Algorithm for Image Reconstruction in Volumetric CT at Low Helical Pitches

    PubMed Central

    Hsieh, Jiang; Nilsen, Roy A.; McOlash, Scott M.

    2006-01-01

    A three-dimensional (3D) weighted helical cone beam filtered backprojection (CB-FBP) algorithm (namely, the original 3D weighted helical CB-FBP algorithm) has already been proposed to reconstruct images from projection data acquired along a helical trajectory over angular ranges up to [0, 2π]. However, an overscan is usually employed in the clinic to reconstruct tomographic images with superior noise characteristics at the most challenging anatomic structures, such as head and spine, extremity imaging, and CT angiography as well. To obtain the best achievable noise characteristics or dose efficiency in a helical overscan, we extended the 3D weighted helical CB-FBP algorithm to handle helical pitches smaller than 1:1 (namely, the extended 3D weighted helical CB-FBP algorithm). By decomposing a helical overscan with an angular range of [0, 2π + Δβ] into a union of full scans corresponding to an angular range of [0, 2π], the extended 3D weighting function is a summation of the 3D weighting functions corresponding to each full scan. An experimental evaluation shows that the extended 3D weighted helical CB-FBP algorithm can improve the noise characteristics or dose efficiency of the 3D weighted helical CB-FBP algorithm at helical pitches smaller than 1:1, while its reconstruction accuracy and computational efficiency are maintained. It is believed that such an efficient CB reconstruction algorithm that can provide superior noise characteristics or dose efficiency at low helical pitches may find extensive application in CT medical imaging. PMID:23165031

  5. Study of 201 non-small cell lung cancer patients given stereotactic ablative radiation therapy shows local control dependence on dose calculation algorithm.

    PubMed

    Latifi, Kujtim; Oliver, Jasmine; Baker, Ryan; Dilling, Thomas J; Stevens, Craig W; Kim, Jongphil; Yue, Binglin; Demarco, Marylou; Zhang, Geoffrey G; Moros, Eduardo G; Feygelman, Vladimir

    2014-04-01

    Pencil beam (PB) and collapsed cone convolution (CCC) dose calculation algorithms differ significantly when used in the thorax. However, such differences have seldom been previously directly correlated with outcomes of lung stereotactic ablative body radiation (SABR). Data for 201 non-small cell lung cancer patients treated with SABR were analyzed retrospectively. All patients were treated with 50 Gy in 5 fractions of 10 Gy each. The radiation prescription mandated that 95% of the planning target volume (PTV) receive the prescribed dose. One hundred sixteen patients were planned with BrainLab treatment planning software (TPS) with the PB algorithm and treated on a Novalis unit. The other 85 were planned on the Pinnacle TPS with the CCC algorithm and treated on a Varian linac. Treatment planning objectives were numerically identical for both groups. The median follow-up times were 24 and 17 months for the PB and CCC groups, respectively. The primary endpoint was local/marginal control of the irradiated lesion. Gray's competing risk method was used to determine the statistical differences in local/marginal control rates between the PB and CCC groups. Twenty-five patients planned with PB and 4 patients planned with the CCC algorithms to the same nominal doses experienced local recurrence. There was a statistically significant difference in recurrence rates between the PB and CCC groups (hazard ratio 3.4 [95% confidence interval: 1.18-9.83], Gray's test P=.019). The differences (Δ) between the 2 algorithms for target coverage were as follows: ΔD99GITV = 7.4 Gy, ΔD99PTV = 10.4 Gy, ΔV90GITV = 13.7%, ΔV90PTV = 37.6%, ΔD95PTV = 9.8 Gy, and ΔDISO = 3.4 Gy. GITV = gross internal tumor volume. Local control rates in patients who were planned to the same nominal dose with the PB and CCC algorithms were statistically significantly different. Possible alternative explanations are described in the report, although they are not thought likely to explain the difference. We conclude that the difference is due to relative dosimetric underdosing of tumors with the PB algorithm. Copyright © 2014 Elsevier Inc. All rights reserved.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Devpura, S; Li, H; Liu, C

    Purpose: To correlate dose distributions computed using six algorithms for recurrent early stage non-small cell lung cancer (NSCLC) patients treated with stereotactic body radiotherapy (SBRT) with outcome (local failure). Methods: Of 270 NSCLC patients treated with 12 Gy × 4, 20 were found to have local recurrence prior to the 2-year time point. These patients were originally planned with a 1-D pencil beam (1-D PB) algorithm. 4D imaging was performed to manage tumor motion. Regions of local failure were determined from follow-up PET-CT scans. Follow-up CT images were rigidly fused to the planning CT (pCT), and recurrent tumor volumes (Vrecur) were mapped to the pCT. Dose was recomputed, retrospectively, using five algorithms: 3-D PB, collapsed cone convolution (CCC), anisotropic analytical algorithm (AAA), AcurosXB, and Monte Carlo (MC). Tumor control probability (TCP) was computed using the Marsden model (1,2). Patterns of failure were classified as central, in-field, marginal, and distant for Vrecur ≥95% of prescribed dose, 95–80%, 80–20%, and ≤20%, respectively (3). Results: Average PTV D95 (dose covering 95% of the PTV) for 3-D PB, CCC, AAA, AcurosXB, and MC relative to 1-D PB were 95.3±2.1%, 84.1±7.5%, 84.9±5.7%, 86.3±6.0%, and 85.1±7.0%, respectively. TCP values for 1-D PB, 3-D PB, CCC, AAA, AcurosXB, and MC were 98.5±1.2%, 95.7±3.0%, 79.6±16.1%, 79.7±16.5%, 81.1±17.5%, and 78.1±20%, respectively. Patterns of local failure were similar for the 1-D and 3-D PB plans, which predicted that the majority of failures occur in central/distal regions, with only ∼15% occurring distantly. However, with convolution/superposition and MC type algorithms, the majority of failures (65%) were predicted to be distant, consistent with the literature. Conclusion: Based on MC and convolution/superposition type algorithms, average PTV D95 and TCP were ∼15% lower than for the planned 1-D PB dose calculation. The patterns-of-failure results suggest that MC and convolution/superposition type algorithms predict different outcomes for patterns of failure relative to PB algorithms. Work supported in part by Varian Medical Systems, Palo Alto, CA.

  7. LET spectra measurements from the STS-35 CPDs

    NASA Technical Reports Server (NTRS)

    1995-01-01

    Linear energy transfer (LET) spectra derived from automated track analysis system (ATAS) track-parameter measurements for crew passive dosimeters (CPDs) flown with the astronauts on STS-35 are plotted. The spread between the seven individual spectra is typical of past manual measurements of sets of CPDs. This difference is probably due to the cumulative net shielding variations experienced by the CPDs as the astronauts carrying them went about their activities on the Space Shuttle. The STS-35 mission was launched on Dec. 2, 1990, at 28.5 degrees inclination and 352-km altitude. This is somewhat higher than the nominal 300-km flights, and the orbit intersects more of the high-intensity trapped proton region in the South Atlantic Anomaly (SAA). However, in comparison with APD spectra measured on earlier lower-altitude missions (STS-26, -29, -30, -32), the flux spectra are all roughly comparable. This may be due to the fact that the STS-35 mission took place close to solar maximum (Feb. 1990), or perhaps to shielding differences. The corresponding dose and dose-equivalent spectra for this mission are shown. The effect of statistical fluctuations at the higher LET values, where track densities are small, is very noticeable. This results in an increased spread within the dose rate and dose-equivalent rate spectra, as compared to the flux spectra. The contribution to dose and dose equivalent per measured track is much greater in the high-LET region, and the differences, though numerically small, are heavily weighted in the integral spectra. The optimum measurement and characterization of the high-LET tails of the spectra represent an important part of the research into plastic nuclear track detector (PNTD) response. The integral flux, dose rate, dose-equivalent rate, and mission dose equivalent for the seven astronauts are also given.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kang, S; Suh, T; Chung, J

    Purpose: The purpose of this study is to evaluate the dosimetric and radiobiological impact of the Acuros XB (AXB) and Anisotropic Analytical Algorithm (AAA) dose calculation algorithms on prostate stereotactic body radiation therapy plans with both conventional flattened (FF) and flattening-filter-free (FFF) modes. Methods: For thirteen patients with prostate cancer, SBRT planning was performed using a 10-MV photon beam with FF and FFF modes. The total dose prescribed to the PTV was 42.7 Gy in 7 fractions. All plans were initially calculated using the AAA algorithm in the Eclipse treatment planning system (11.0.34), and then were recalculated using AXB with the same MUs and MLC files. The four types of plans, for the different algorithms and beam modes, were compared in terms of homogeneity and conformity. To evaluate the radiobiological impact, tumor control probability (TCP) and normal tissue complication probability (NTCP) calculations were performed. Results: For the PTV, both calculation algorithms and beam modes led to comparable homogeneity and conformity. However, the averaged TCP values in AXB plans were always lower than in AAA plans, with an average difference of 5.3% and 6.1% for the 10-MV FFF and FF beams, respectively. In addition, the averaged NTCP values for organs at risk (OARs) were comparable. Conclusion: This study showed that prostate SBRT plans yielded comparable dosimetric results with the different dose calculation algorithms and delivery beam modes. For the biological results, even though NTCP values for both calculation algorithms and beam modes were similar, AXB plans produced slightly lower TCP than the AAA plans.

  9. A generalized time-frequency subtraction method for robust speech enhancement based on wavelet filter banks modeling of human auditory system.

    PubMed

    Shao, Yu; Chang, Chip-Hong

    2007-08-01

    We present a new speech enhancement scheme for a single-microphone system to meet the demand for quality noise reduction algorithms capable of operating at a very low signal-to-noise ratio. A psychoacoustic model is incorporated into the generalized perceptual wavelet denoising method to reduce the residual noise and improve the intelligibility of speech. The proposed method is a generalized time-frequency subtraction algorithm, which advantageously exploits the wavelet multirate signal representation to preserve the critical transient information. Simultaneous masking and temporal masking of the human auditory system are modeled by the perceptual wavelet packet transform via the frequency and temporal localization of speech components. The wavelet coefficients are used to calculate the Bark spreading energy and temporal spreading energy, from which a time-frequency masking threshold is deduced to adaptively adjust the subtraction parameters of the proposed method. An unvoiced speech enhancement algorithm is also integrated into the system to improve the intelligibility of speech. Through rigorous objective and subjective evaluations, it is shown that the proposed speech enhancement system is capable of reducing noise with little speech degradation in adverse noise environments and the overall performance is superior to several competitive methods.
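
    The core of a generalized time-frequency subtraction step, with the perceptually derived oversubtraction factor treated as an input, might be sketched as follows. In the paper that factor is adapted from wavelet-domain masking thresholds, which are not modeled here; the exponent, spectral floor, and names are assumptions.

        import numpy as np

        def generalized_subtraction(S, N, alpha, beta=0.01, p=2.0):
            # Generalized (power) spectral subtraction on magnitude spectra:
            # S is the noisy spectrum, N a noise estimate, `alpha` a per-bin
            # oversubtraction factor (in the paper, adapted from time-frequency
            # masking thresholds). `beta*N**p` is a spectral floor that limits
            # musical-noise artifacts.
            mag_p = np.maximum(S**p - alpha * N**p, beta * N**p)
            return mag_p ** (1.0 / p)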

  10. Research on Formation of Microsatellite Communication with Genetic Algorithm

    PubMed Central

    Wu, Guoqiang; Bai, Yuguang; Sun, Zhaowei

    2013-01-01

    For a formation of three microsatellites which fly in the same orbit and perform three-dimensional solid mapping of terrain, this paper proposes an optimizing design method for the space circular formation order based on an improved genetic algorithm and provides an intersatellite direct-sequence spread spectrum communication system. The calculation of the LEO formation-flying intersatellite links is guided by the special requirements of formation-flying microsatellite intersatellite links, and the transmitter power is also confirmed through simulation. The method of space circular formation order optimizing design based on the improved genetic algorithm is given, and it can keep the formation order steady for a long time under various perturbing impulses. The intersatellite direct-sequence spread spectrum communication system is also provided. It is found that, when the distance is 1 km and the data rate is 1 Mbps, the input waveform matches the output waveform well. LDPC coding can improve the communication performance: the error-correcting capability of the (512, 256) LDPC code is distinctly better than that of the (2, 1, 7) convolutional code. The designed system can satisfy the communication requirements of microsatellites. The presented method thus provides a significant theoretical foundation for formation flying and intersatellite communication. PMID:24078796

  11. Research on formation of microsatellite communication with genetic algorithm.

    PubMed

    Wu, Guoqiang; Bai, Yuguang; Sun, Zhaowei

    2013-01-01

    For a formation of three microsatellites which fly in the same orbit and perform three-dimensional solid mapping of terrain, this paper proposes an optimizing design method for the space circular formation order based on an improved genetic algorithm and provides an intersatellite direct-sequence spread spectrum communication system. The calculation of the LEO formation-flying intersatellite links is guided by the special requirements of formation-flying microsatellite intersatellite links, and the transmitter power is also confirmed through simulation. The method of space circular formation order optimizing design based on the improved genetic algorithm is given, and it can keep the formation order steady for a long time under various perturbing impulses. The intersatellite direct-sequence spread spectrum communication system is also provided. It is found that, when the distance is 1 km and the data rate is 1 Mbps, the input waveform matches the output waveform well. LDPC coding can improve the communication performance: the error-correcting capability of the (512, 256) LDPC code is distinctly better than that of the (2, 1, 7) convolutional code. The designed system can satisfy the communication requirements of microsatellites. The presented method thus provides a significant theoretical foundation for formation flying and intersatellite communication.

  12. Dosimetric verification of radiotherapy treatment planning systems in Serbia: national audit

    PubMed Central

    2012-01-01

    Background Independent external audits play an important role in the quality assurance programme in radiation oncology. The audit supported by the IAEA in Serbia was designed to review the whole chain of activities in the 3D conformal radiotherapy (3D-CRT) workflow, from patient data acquisition to treatment planning and dose delivery. The audit was based on the IAEA recommendations and focused on the dosimetry part of the treatment planning and delivery processes. Methods The audit was conducted in three radiotherapy departments of Serbia. An anthropomorphic phantom was scanned with a computed tomography (CT) unit, and treatment plans for eight different test cases involving various beam configurations suggested by the IAEA were prepared on the local treatment planning systems (TPSs). The phantom was irradiated following the treatment plans for these test cases, and doses at specific points were measured with an ionization chamber. The differences between the measured and calculated doses were reported. Results The measurements were conducted for different photon beam energies and TPS calculation algorithms. The deviations between the measured and calculated values for all test cases made with advanced algorithms were within the agreement criteria, while larger deviations were observed for simpler algorithms. The number of measurements with results outside the agreement criteria increased with the beam energy and decreased with TPS calculation algorithm sophistication. Also, a few errors in the basic dosimetry data in the TPSs were detected and corrected. Conclusions The audit helped the users to better understand the operational features and limitations of their TPSs and resulted in increased confidence in dose calculation accuracy using TPSs. The audit results indicated the shortcomings of simpler algorithms for the test cases performed, and therefore the transition to more advanced algorithms is highly desirable. PMID:22971539

  13. Dosimetric verification of radiotherapy treatment planning systems in Serbia: national audit.

    PubMed

    Rutonjski, Laza; Petrović, Borislava; Baucal, Milutin; Teodorović, Milan; Cudić, Ozren; Gershkevitsh, Eduard; Izewska, Joanna

    2012-09-12

    Independent external audits play an important role in the quality assurance programme in radiation oncology. The audit supported by the IAEA in Serbia was designed to review the whole chain of activities in the 3D conformal radiotherapy (3D-CRT) workflow, from patient data acquisition to treatment planning and dose delivery. The audit was based on the IAEA recommendations and focused on the dosimetry part of the treatment planning and delivery processes. The audit was conducted in three radiotherapy departments of Serbia. An anthropomorphic phantom was scanned with a computed tomography (CT) unit, and treatment plans for eight different test cases involving various beam configurations suggested by the IAEA were prepared on the local treatment planning systems (TPSs). The phantom was irradiated following the treatment plans for these test cases, and doses at specific points were measured with an ionization chamber. The differences between the measured and calculated doses were reported. The measurements were conducted for different photon beam energies and TPS calculation algorithms. The deviations between the measured and calculated values for all test cases made with advanced algorithms were within the agreement criteria, while larger deviations were observed for simpler algorithms. The number of measurements with results outside the agreement criteria increased with the beam energy and decreased with TPS calculation algorithm sophistication. Also, a few errors in the basic dosimetry data in the TPSs were detected and corrected. The audit helped the users to better understand the operational features and limitations of their TPSs and resulted in increased confidence in dose calculation accuracy using TPSs. The audit results indicated the shortcomings of simpler algorithms for the test cases performed, and therefore the transition to more advanced algorithms is highly desirable.

  14. GPU-based fast cone beam CT reconstruction from undersampled and noisy projection data via total variation.

    PubMed

    Jia, Xun; Lou, Yifei; Li, Ruijiang; Song, William Y; Jiang, Steve B

    2010-04-01

    Cone-beam CT (CBCT) plays an important role in image guided radiation therapy (IGRT). However, the large radiation dose from serial CBCT scans in most IGRT procedures raises a clinical concern, especially for pediatric patients, who are essentially excluded from receiving IGRT for this reason. The goal of this work is to develop a fast GPU-based algorithm to reconstruct CBCT from undersampled and noisy projection data so as to lower the imaging dose. The CBCT is reconstructed by minimizing an energy functional consisting of a data fidelity term and a total variation regularization term. The authors developed a GPU-friendly version of the forward-backward splitting algorithm to solve this model. A multigrid technique is also employed. It is found that 20-40 x-ray projections are sufficient to reconstruct images with satisfactory quality for IGRT. The reconstruction time ranges from 77 to 130 s on an NVIDIA Tesla C1060 (NVIDIA, Santa Clara, CA) GPU card, depending on the number of projections used, which is estimated to be about 100 times faster than similar iterative reconstruction approaches. Moreover, phantom studies indicate that the algorithm enables the CBCT to be reconstructed under a scanning protocol with as low as 0.1 mA s/projection. Compared with the currently widely used full-fan head and neck scanning protocol of approximately 360 projections with 0.4 mA s/projection, it is estimated that an overall 36-72 times dose reduction has been achieved with this fast CBCT reconstruction algorithm. This work indicates that the developed GPU-based CBCT reconstruction algorithm is capable of lowering imaging dose considerably. The high computational efficiency of this algorithm makes the iterative CBCT reconstruction approach applicable in real clinical environments.
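
    The forward-backward splitting scheme named above alternates a gradient step on the data-fidelity term with a proximal step on the TV term. The sketch below assumes caller-supplied projector/backprojector callables `A`/`At` and substitutes a rough smoothed-TV descent for the exact proximal operator; it illustrates the structure only, not the authors' GPU implementation or parameter choices.

        import numpy as np

        def tv_denoise(x, lam, n_iters=10, eps=1e-8):
            # Rough smoothed-TV descent standing in for the exact TV prox.
            for _ in range(n_iters):
                g = np.zeros_like(x)
                for ax in range(x.ndim):
                    d = x - np.roll(x, 1, axis=ax)    # backward differences
                    g += d / np.sqrt(d * d + eps)
                    d = np.roll(x, -1, axis=ax) - x   # forward differences
                    g -= d / np.sqrt(d * d + eps)
                x = x - 0.1 * lam * g
            return x

        def fbs_reconstruct(A, At, b, x, n_iters=30, step=0.5, lam=0.05):
            # Forward-backward splitting: gradient step on 0.5*||Ax - b||^2,
            # then an (approximate) TV proximal step.
            for _ in range(n_iters):
                x = x - step * At(A(x) - b)      # forward (gradient) step
                x = tv_denoise(x, lam * step)    # backward (proximal) step
            return x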

  15. SU-E-T-381: Evaluation of Calculated Dose Accuracy for Organs-At-Risk Located at Out-Of-Field in a Commercial Treatment Planning System for High Energy Photon Beams Produced From TrueBeam Accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, L; Ding, G

    Purpose: Dose calculation accuracy for the out-of-field dose is important for predicting the dose to organs-at-risk located outside the primary beams. Existing publications evaluating the out-of-field dose calculation accuracy of treatment planning systems (TPS) have focused on low-energy (6 MV) photons. This study evaluates the out-of-field dose calculation accuracy of the AAA algorithm for 15 MV high energy photon beams. Methods: We used the EGSnrc Monte Carlo (MC) codes to evaluate the AAA algorithm in the Varian Eclipse TPS (v.11). The incident beams start from validated Varian phase-space sources for a TrueBeam linac equipped with a Millennium 120 MLC. Dose comparisons between AAA and MC for CT-based realistic patient treatment plans using VMAT techniques for prostate and lung were performed, and the uncertainties of organ doses predicted by AAA at out-of-field locations were evaluated. Results: The results show that AAA calculations under-estimate doses at the dose level of 1% (or less) of the prescribed dose for CT-based patient treatment plans using VMAT techniques. In regions where the dose is only 1% of the prescribed dose, although AAA under-estimates the out-of-field dose by 30% relative to the local dose, this is only about 0.3% of the prescribed dose. For example, the uncertainty of the calculated organ dose to the liver or kidney located out-of-field is <0.3% of the prescribed dose. Conclusion: For 15 MV high energy photon beams, very good agreement (<1%) in calculated dose distributions was obtained between AAA and MC. The uncertainty of out-of-field dose calculations predicted by the AAA algorithm for realistic patient VMAT plans is <0.3% of the prescribed dose in regions where the dose relative to the prescribed dose is <1%, although the uncertainties can be much larger relative to local doses. For organs-at-risk located out-of-field, the error in dose predicted by Eclipse using AAA is negligible. This work was conducted in part using the resources of Varian research grant VUMC40590-R.

  16. Implementation of a dose gradient method into optimization of dose distribution in prostate cancer 3D-CRT plans

    PubMed Central

    Giżyńska, Marta K.; Kukołowicz, Paweł F.; Kordowski, Paweł

    2014-01-01

    Aim The aim of this work is to present a method of beam weight and wedge angle optimization for patients with prostate cancer. Background 3D-CRT is usually realized with forward planning based on a trial-and-error method. Several authors have published methods of beam weight optimization applicable to 3D-CRT; still, none of these methods is in common use. Materials and methods Optimization is based on the assumption that the best plan is achieved if the dose gradient at the ICRU point is equal to zero. Our optimization algorithm requires the beam quality index, depth of maximum dose, profiles of wedged fields, and the maximum dose to the femoral heads. The method was tested for 10 patients with prostate cancer treated with the 3-field technique. Optimized plans were compared with plans prepared by 12 experienced planners. The dose standard deviation in the target volume and the minimum and maximum doses were analyzed. Results The quality of plans obtained with the proposed optimization algorithm was comparable to that of plans prepared by experienced planners. For optimization of beam weights and wedge angles, the mean difference in target dose standard deviation was 0.1% in favor of the plans prepared by planners. Introducing a correction factor for the patient body outline into the dose gradient at the ICRU point improved dose distribution homogeneity: on average, a 0.1% lower standard deviation was achieved with the optimization algorithm. No significant difference in the mean dose-volume histogram for the rectum was observed. Conclusions Optimization shortens planning time considerably: the average planning time was 5 min for forward planning and less than a minute for computer optimization. PMID:25337411

  17. Optimization for high-dose-rate brachytherapy of cervical cancer with adaptive simulated annealing and gradient descent.

    PubMed

    Yao, Rui; Templeton, Alistair K; Liao, Yixiang; Turian, Julius V; Kiel, Krystyna D; Chu, James C H

    2014-01-01

    To validate an in-house optimization program that uses adaptive simulated annealing (ASA) and gradient descent (GD) algorithms and investigate features of physical dose and generalized equivalent uniform dose (gEUD)-based objective functions in high-dose-rate (HDR) brachytherapy for cervical cancer. Eight Syed/Neblett template-based cervical cancer HDR interstitial brachytherapy cases were used for this study. Brachytherapy treatment plans were first generated using inverse planning simulated annealing (IPSA). Using the same dwell positions designated in IPSA, plans were then optimized with both physical dose and gEUD-based objective functions, using both ASA and GD algorithms. Comparisons were made between plans both qualitatively and based on dose-volume parameters, evaluating each optimization method and objective function. A hybrid objective function was also designed and implemented in the in-house program. The ASA plans are higher on bladder V75% and D2cc (p=0.034) and lower on rectum V75% and D2cc (p=0.034) than the IPSA plans. The ASA and GD plans are not significantly different. The gEUD-based plans have higher homogeneity index (p=0.034), lower overdose index (p=0.005), and lower rectum gEUD and normal tissue complication probability (p=0.005) than the physical dose-based plans. The hybrid function can produce a plan with dosimetric parameters between the physical dose-based and gEUD-based plans. The optimized plans with the same objective value and dose-volume histogram could have different dose distributions. Our optimization program based on ASA and GD algorithms is flexible on objective functions, optimization parameters, and can generate optimized plans comparable with IPSA. Copyright © 2014 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.
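
    The gEUD referenced in the objective functions has a standard closed form, gEUD = (Σ_i v_i d_i^a)^(1/a), over a structure's dose-volume samples. A direct transcription for equal-volume samples (function and parameter naming assumed):

        import numpy as np

        def geud(doses, a):
            # Generalized equivalent uniform dose for equal-volume dose samples.
            # Large positive `a` approaches the maximum dose (serial organs);
            # negative `a` emphasizes cold spots (targets). Zero doses with
            # negative `a` are undefined.
            d = np.asarray(doses, dtype=float)
            return np.mean(d ** a) ** (1.0 / a)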

  18. Pharmacogenetic-guided dosing of coumarin anticoagulants: algorithms for warfarin, acenocoumarol and phenprocoumon

    PubMed Central

    Verhoef, Talitha I; Redekop, William K; Daly, Ann K; van Schie, Rianne M F; de Boer, Anthonius; Maitland-van der Zee, Anke-Hilse

    2014-01-01

    Coumarin derivatives, such as warfarin, acenocoumarol and phenprocoumon are frequently prescribed oral anticoagulants to treat and prevent thromboembolism. Because there is a large inter-individual and intra-individual variability in dose–response and a small therapeutic window, treatment with coumarin derivatives is challenging. Certain polymorphisms in CYP2C9 and VKORC1 are associated with lower dose requirements and a higher risk of bleeding. In this review we describe the use of different coumarin derivatives, pharmacokinetic characteristics of these drugs and differences amongst the coumarins. We also describe the current clinical challenges and the role of pharmacogenetic factors. These genetic factors are used to develop dosing algorithms and can be used to predict the right coumarin dose. The effectiveness of this new dosing strategy is currently being investigated in clinical trials. PMID:23919835
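
    Such dosing algorithms are typically linear regressions on a transformed (often square-root) weekly dose, with clinical and genotype covariates. The sketch below shows only this general form; every coefficient is a placeholder and none is taken from a published algorithm or from this review.

        def weekly_dose_mg(age_decades, bsa, cyp2c9_variants, vkorc1_a_alleles,
                           amiodarone=False):
            # Generic form of a pharmacogenetic coumarin-dosing model. All
            # coefficients are illustrative placeholders, NOT published values.
            sqrt_dose = (8.0
                         - 0.3 * age_decades          # dose falls with age
                         + 0.4 * bsa                  # rises with body surface area
                         - 0.9 * cyp2c9_variants      # reduced-function alleles
                         - 1.0 * vkorc1_a_alleles     # VKORC1 -1639 A alleles
                         - 0.5 * amiodarone)          # interacting co-medication
            return max(sqrt_dose, 0.0) ** 2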

  19. Comparison of low-contrast detectability between two CT reconstruction algorithms using voxel-based 3D printed textured phantoms.

    PubMed

    Solomon, Justin; Ba, Alexandre; Bochud, François; Samei, Ehsan

    2016-12-01

    To use novel voxel-based 3D printed textured phantoms in order to compare low-contrast detectability between two reconstruction algorithms, FBP (filtered-backprojection) and SAFIRE (sinogram affirmed iterative reconstruction), and to determine what impact background texture (i.e., anatomical noise) has on estimating the dose reduction potential of SAFIRE. Liver volumes were segmented from 23 abdominal CT cases. The volumes were characterized in terms of texture features from gray-level co-occurrence and run-length matrices. Using a 3D clustered lumpy background (CLB) model, a fitting technique based on a genetic optimization algorithm was used to find CLB textures that were reflective of the liver textures, accounting for CT system factors of spatial blurring and noise. With the modeled background texture as a guide, four cylindrical phantoms (Textures A-C and uniform, 165 mm in diameter and 30 mm in height) were designed, each containing 20 low-contrast spherical signals (6 mm diameter at nominal contrast levels of ∼3.2, 5.2, 7.2, 10, and 14 HU, with four repeats per signal). The phantoms were voxelized and input into a commercial multimaterial 3D printer (Object Connex 350), with custom software for voxel-based printing (using principles of digital dithering). Images of the textured phantoms and a corresponding uniform phantom were acquired at six radiation dose levels (SOMATOM Flash, Siemens Healthcare), and observer model detection performance (detectability index of a multislice channelized Hotelling observer) was estimated for each condition (5 contrasts × 6 doses × 2 reconstructions × 4 backgrounds = 240 total conditions). A multivariate generalized regression analysis was performed (linear terms, no interactions, random error term, log link function) to assess whether dose, reconstruction algorithm, signal contrast, and background type have statistically significant effects on detectability. Also, fitted curves of detectability (averaged across contrast levels) as a function of dose were constructed for each reconstruction algorithm and background texture. FBP and SAFIRE were compared for each background type to determine the improvement in detectability at a given dose, and the reduced dose at which SAFIRE had equivalent performance compared to FBP at 100% dose. Detectability increased with increasing radiation dose (P = 2.7 × 10^-59) and contrast level (P = 2.2 × 10^-86) and was higher in the uniform phantom compared to the textured phantoms (P = 6.9 × 10^-51). Overall, SAFIRE had higher d' compared to FBP (P = 0.02). The estimated dose reduction potential of SAFIRE was found to be 8%, 10%, 27%, and 8% for the Texture-A, Texture-B, Texture-C, and uniform phantoms, respectively. In all background types, detectability was higher with SAFIRE than with FBP. However, the relative improvement observed with SAFIRE was highly dependent on the complexity of the background texture. Iterative algorithms such as SAFIRE should be assessed in the most realistic context possible.

  20. A novel strategy for load balancing of distributed medical applications.

    PubMed

    Logeswaran, Rajasvaran; Chen, Li-Choo

    2012-04-01

    Current trends in medicine, specifically in the electronic handling of medical applications, ranging from digital imaging, paperless hospital administration and electronic medical records, telemedicine, to computer-aided diagnosis, create a burden on the network. Distributed Service Architectures, such as the Intelligent Network (IN), Telecommunication Information Networking Architecture (TINA), and Open Service Access (OSA), are able to meet this new challenge. Distribution enables computational tasks to be spread among multiple processors; hence, performance is an important issue. This paper proposes a novel approach to load balancing, the Random Sender Initiated Algorithm, for the distribution of tasks among several nodes sharing the same computational object (CO) instances in Distributed Service Architectures. Simulations illustrate that the proposed algorithm produces better network performance than the benchmark load balancing algorithms, the Random Node Selection Algorithm and the Shortest Queue Algorithm, especially under medium and heavily loaded conditions.

  1. Fire spread estimation on forest wildfire using ensemble kalman filter

    NASA Astrophysics Data System (ADS)

    Syarifah, Wardatus; Apriliani, Erna

    2018-04-01

    Wildfire is one of the most frequent disasters in the world; forest wildfires, for example, cause forest populations to decrease. Forest wildfires, whether naturally occurring or prescribed, are potential risks for ecosystems and human settlements. These risks can be managed by monitoring the weather, prescribing fires to limit available fuel, and creating firebreaks. With computer simulations we can predict and explore how fires may spread. A model of fire spread in forest wildfires was established to determine the fire properties. The fire spread model is based on a reaction-diffusion equation. There are many methods to estimate the spread of fire. The Ensemble Kalman Filter (EnKF) is a modification of the Kalman Filter algorithm that can be used to estimate both linear and non-linear system models. This research applies the EnKF method to estimate the spread of fire in a forest wildfire. Before applying the EnKF, the fire spread model is discretized using the finite difference method. Finally, the analysis is illustrated by numerical simulation. The simulation results show that the EnKF estimate approaches the system model more closely as the ensemble size increases and as the covariances of the system model and the measurement become smaller.
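
    A generic EnKF analysis step of the kind applied here, with a caller-supplied (possibly nonlinear) forecast model, can be written as below. The linear observation operator H, the perturbed-observation variant, and all names are assumptions; the paper's fire-spread model itself is the discretized reaction-diffusion system, not reproduced here.

        import numpy as np

        def enkf_step(ensemble, forecast, H, y, R, rng=np.random.default_rng(1)):
            # One EnKF cycle: propagate each member with the model `forecast`,
            # then update with perturbed observations y. H maps state to
            # observation space; R is the observation-error covariance.
            X = np.array([forecast(x) for x in ensemble])  # forecast ensemble (n, d)
            A = X - X.mean(axis=0)                         # state anomalies
            HX = X @ H.T
            HA = HX - HX.mean(axis=0)                      # observed anomalies
            n = len(X)
            Pxy = A.T @ HA / (n - 1)                       # cross covariance
            Pyy = HA.T @ HA / (n - 1) + R                  # innovation covariance
            K = Pxy @ np.linalg.inv(Pyy)                   # Kalman gain
            Y = y + rng.multivariate_normal(np.zeros(len(y)), R, size=n)
            return X + (Y - HX) @ K.T                      # analysis ensemble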

  2. Linear feasibility algorithms for treatment planning in interstitial photodynamic therapy

    NASA Astrophysics Data System (ADS)

    Rendon, A.; Beck, J. C.; Lilge, Lothar

    2008-02-01

    Interstitial photodynamic therapy (IPDT) has been under intense investigation in recent years, with multiple clinical trials underway. This effort has demanded the development of optimization strategies that determine the best locations and output powers of the light sources (cylindrical or point diffusers) to achieve optimal light delivery. Furthermore, we have recently introduced cylindrical diffusers with customizable emission profiles, placing additional requirements on the optimization algorithms, particularly in terms of the stability of the inverse problem. Here, we present a general class of linear feasibility algorithms and their properties. Moreover, we compare two particular instances of these algorithms which have been used in the context of IPDT: the Cimmino algorithm and a weighted gradient descent (WGD) algorithm. The algorithms were compared in terms of their convergence properties, the cost function they minimize in the infeasible case, their ability to regularize the inverse problem, and the resulting optimal light dose distributions. Our results show that the WGD algorithm overall performs slightly better than the Cimmino algorithm and that it converges to a minimizer of a clinically relevant cost function in the infeasible case. Interestingly, however, the treatment plans resulting from either algorithm were very similar in terms of the resulting fluence maps and dose-volume histograms, once the diffuser powers were adjusted to achieve equal prostate coverage.
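
    A Cimmino-type iteration for the linear feasibility problem arising here, lower and upper light-dose bounds per voxel with nonnegative diffuser powers, averages the projections onto all violated half-spaces at once. The relaxation parameter, equal row weights, and names in this sketch are assumptions.

        import numpy as np

        def cimmino(A, lo, hi, x, n_iters=200, relax=1.0):
            # Simultaneous-projection solve of lo <= A x <= hi, x >= 0, where
            # rows of A map source powers x to voxel doses.
            m = A.shape[0]
            row_norm2 = (A * A).sum(axis=1) + 1e-12
            for _ in range(n_iters):
                d = A @ x
                # signed distance each dose must move to re-enter its bounds
                viol = np.where(d < lo, lo - d, np.where(d > hi, hi - d, 0.0))
                # averaged projections onto the violated half-spaces
                x = x + relax * (A.T @ (viol / row_norm2)) / m
                x = np.maximum(x, 0.0)   # nonnegative diffuser powers
            return x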

  3. SU-E-T-577: Commissioning of a Deterministic Algorithm for External Photon Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, T; Finlay, J; Mesina, C

    Purpose: We report commissioning results for a deterministic algorithm for external photon beam treatment planning. A deterministic algorithm solves the radiation transport equations directly using a finite difference method, thus improving the accuracy of dose calculation, particularly under heterogeneous conditions, with results similar to those of Monte Carlo (MC) simulation. Methods: Commissioning data for photon energies 6-15 MV include the percentage depth dose (PDD) measured at SSD = 90 cm and the output ratio in water (Spc), both normalized to 10 cm depth, for field sizes between 2 and 40 cm and depths between 0 and 40 cm. The off-axis ratio (OAR) for the same set of field sizes was used at 5 depths (dmax, 5, 10, 20, 30 cm). The final model was compared with the commissioning data as well as additional benchmark data. The benchmark data include the dose per MU determined for 17 points for SSDs between 80 and 110 cm, depths between 5 and 20 cm, and lateral offsets of up to 16.5 cm. Relative comparisons were made in a heterogeneous phantom made of cork and solid water. Results: Compared to the commissioning beam data, the agreement is generally better than 2%, with larger errors (up to 13%) observed in the buildup regions of the PDD and the penumbra regions of the OAR profiles. The overall mean standard deviation is 0.04% when all data are taken into account. Compared to the benchmark data, the agreement is generally better than 2%. Relative comparison in the heterogeneous phantom is in general better than 4%. Conclusion: A commercial deterministic algorithm was commissioned for megavoltage photon beams. In a homogeneous medium, the agreement between the algorithm and measurement at the benchmark points is generally better than 2%. The dose accuracy of the deterministic algorithm is better than that of a convolution algorithm in heterogeneous media.

  4. A low-count reconstruction algorithm for Compton-based prompt gamma imaging

    NASA Astrophysics Data System (ADS)

    Huang, Hsuan-Ming; Liu, Chih-Chieh; Jan, Meei-Ling; Lee, Ming-Wei

    2018-04-01

    The Compton camera is an imaging device which has been proposed to detect prompt gammas (PGs) produced by proton–nuclear interactions within tissue during proton beam irradiation. Compton-based PG imaging has been developed to verify proton ranges because PG rays, particularly characteristic ones, have strong correlations with the distribution of the proton dose. However, accurate image reconstruction from characteristic PGs is challenging because the detector efficiency and resolution are generally low. Our previous study showed that point spread functions can be incorporated into the reconstruction process to improve image resolution. In this study, we proposed a low-count reconstruction algorithm to improve the image quality of a characteristic PG emission by pooling information from other characteristic PG emissions. PGs were simulated from a proton beam irradiated on a water phantom, and a two-stage Compton camera was used for PG detection. The results show that the image quality of the reconstructed characteristic PG emission is improved with our proposed method in contrast to the standard reconstruction method using events from only one characteristic PG emission. For the 4.44 MeV PG rays, both methods can be used to predict the positions of the peak and the distal falloff with a mean accuracy of 2 mm. Moreover, only the proposed method can improve the estimated positions of the peak and the distal falloff of 5.25 MeV PG rays, and a mean accuracy of 2 mm can be reached.

  5. Denoising of polychromatic CT images based on their own noise properties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Ji Hye; Chang, Yongjin; Ra, Jong Beom, E-mail: jbra@kaist.ac.kr

    Purpose: Because of its high diagnostic accuracy and fast scan time, computed tomography (CT) has been widely used in various clinical applications. Since a CT scan exposes patients to radiation, however, dose reduction has recently been recognized as an important issue in CT imaging; yet low-dose CT increases noise in the image and thereby deteriorates the accuracy of diagnosis. In this paper, the authors develop an efficient denoising algorithm for low-dose CT images obtained using a polychromatic x-ray source. The algorithm is based on two steps: (i) estimation of the space-variant noise statistics, which are uniquely determined according to the system geometry and the scanned object, and (ii) a subsequent novel conversion of the estimated noise to Gaussian noise, so that an existing high-performance Gaussian noise filtering algorithm can be directly applied to CT images with non-Gaussian noise. Methods: For efficient polychromatic CT image denoising, the authors first reconstruct an image with the iterative maximum-likelihood polychromatic algorithm for CT to alleviate the beam-hardening problem. They then estimate the space-variant noise variance distribution in the image domain. Since many high-performance denoising algorithms are available for Gaussian noise, image denoising can become much more efficient if they can be used. Hence, the authors propose a novel conversion scheme to transform the estimated space-variant noise to near-Gaussian noise. In the suggested scheme, the authors first convert the image so that its mean and variance have a linear relationship, and then produce a Gaussian image via a variance-stabilizing transform. The authors then apply a block-matching 4D algorithm that is optimized for noise reduction of the Gaussian image, and reconvert the result to obtain the final denoised image. To examine the performance of the proposed method, an XCAT phantom simulation and a physical phantom experiment were conducted. Results: Both simulation and experimental results show that, unlike existing denoising algorithms, the proposed algorithm can effectively reduce the noise over the whole region of a CT image while preventing degradation of the image resolution. Conclusions: To effectively denoise polychromatic low-dose CT images, a novel denoising algorithm is proposed. Because this algorithm is based on the noise statistics of the reconstructed polychromatic CT image, the spatially varying noise in the image is effectively reduced, so that the denoised image has homogeneous quality over the image domain. Through a simulation and a real experiment, it is verified that the proposed algorithm delivers considerably better performance than existing denoising algorithms.
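
    The classical example of the variance-stabilizing idea invoked in step (ii) is the Anscombe transform for (approximately) Poisson data; the authors' conversion differs in that it first linearizes the mean-variance relationship, which is not modeled in this sketch.

        import numpy as np

        def anscombe(x):
            # Maps approximately Poisson data to roughly unit-variance Gaussian,
            # so a Gaussian denoiser (e.g., BM4D-style block matching) applies.
            return 2.0 * np.sqrt(np.maximum(x, 0.0) + 3.0 / 8.0)

        def inverse_anscombe(y):
            # Simple algebraic inverse; unbiased inverses also exist, and the
            # paper's exact reconversion differs.
            return (y / 2.0) ** 2 - 3.0 / 8.0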

  6. Preliminary assessment of the impact of incorporating a detailed algorithm for the effects of nuclear irradiation on combat crew performance into the Janus combat simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warshawsky, A.S.; Uzelac, M.J.; Pimper, J.E.

    The Crew III algorithm for assessing time- and dose-dependent combat crew performance subsequent to nuclear irradiation was incorporated into the Janus combat simulation system. Battle outcomes using this algorithm were compared to outcomes based on the currently used time-independent "cookie-cutter" assessment methodology. The results illustrate quantifiable differences in battle outcome between the two assessment techniques. The results suggest that tactical nuclear weapons are more effective than currently assumed if the performance degradation attributed to radiation doses between 150 and 3000 rad is taken into account. 6 refs., 9 figs.

  7. Technical Note: A direct ray-tracing method to compute integral depth dose in pencil beam proton radiography with a multilayer ionization chamber.

    PubMed

    Farace, Paolo; Righetto, Roberto; Deffet, Sylvain; Meijers, Arturs; Vander Stappen, Francois

    2016-12-01

    To introduce a fast ray-tracing algorithm in pencil beam proton radiography (PR) with a multilayer ionization chamber (MLIC) for in vivo range error mapping. Pencil beam PR was obtained by delivering spots uniformly positioned in a square (45 × 45 mm² field of view) of 9 × 9 spots capable of crossing the phantoms (210 MeV). The exit beam was collected by a MLIC to sample the integral depth dose (IDD_MLIC). PRs of an electron-density phantom and of a head phantom were acquired by moving the couch to obtain multiple 45 × 45 mm² frames. To map the corresponding range errors, the two-dimensional set of IDD_MLIC was compared with (i) the integral depth dose computed by the treatment planning system (TPS) by both analytic (IDD_TPS) and Monte Carlo (IDD_MC) algorithms in a volume of water simulating the MLIC at the CT, and (ii) the integral depth dose directly computed by a simple ray-tracing algorithm (IDD_direct) through the same CT data. The exact spatial position of the spot pattern was numerically adjusted by testing different in-plane positions and selecting the one that minimized the range differences between IDD_direct and IDD_MLIC. Range error mapping was feasible by both the TPS and the ray-tracing methods, but very sensitive to even small misalignments. In homogeneous regions, the range errors computed by the direct ray-tracing algorithm matched the results obtained by both the analytic and the Monte Carlo algorithms. In both phantoms, lateral heterogeneities were better modeled by the ray-tracing and the Monte Carlo algorithms than by the analytic TPS computation. Accordingly, when the pencil beam crossed lateral heterogeneities, the range errors mapped by the direct algorithm matched the Monte Carlo maps better than those obtained by the analytic algorithm. Finally, the simplicity of the ray-tracing algorithm allowed a prototype procedure for automated spatial alignment to be implemented. The ray-tracing algorithm can reliably replace the TPS method in MLIC PR for in vivo range verification, and it can be a key component in developing software tools for spatial alignment and correction of CT calibration.
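
    The heart of such a direct ray-tracing method is accumulating water-equivalent path length (WEPL) through a grid of relative stopping powers derived from the CT calibration curve; a reference IDD measured in water can then be shifted by this WEPL. A deliberately crude sketch (nearest-neighbor lookup, fixed step size; all names assumed):

        import numpy as np

        def wepl_along_ray(rsp, start, direction, step=0.5, n_steps=2000):
            # Accumulate water-equivalent path length by stepping a ray through
            # a grid of relative stopping powers `rsp` (voxel units). Trilinear
            # interpolation is omitted for brevity.
            pos = np.asarray(start, dtype=float)
            d = np.asarray(direction, dtype=float)
            d = d / np.linalg.norm(d)
            wepl = 0.0
            for _ in range(n_steps):
                idx = tuple(np.round(pos).astype(int))
                if any(i < 0 or i >= s for i, s in zip(idx, rsp.shape)):
                    break                   # ray has left the volume
                wepl += rsp[idx] * step     # water-equivalent length of this step
                pos += d * step
            return wepl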

  8. Site-specific range uncertainties caused by dose calculation algorithms for proton therapy

    NASA Astrophysics Data System (ADS)

    Schuemann, J.; Dowdell, S.; Grassberger, C.; Min, C. H.; Paganetti, H.

    2014-08-01

    The purpose of this study was to assess the possibility of introducing site-specific range margins to replace current generic margins in proton therapy. Further, the goal was to study the potential of reducing margins with current analytical dose calculations methods. For this purpose we investigate the impact of complex patient geometries on the capability of analytical dose calculation algorithms to accurately predict the range of proton fields. Dose distributions predicted by an analytical pencil-beam algorithm were compared with those obtained using Monte Carlo (MC) simulations (TOPAS). A total of 508 passively scattered treatment fields were analyzed for seven disease sites (liver, prostate, breast, medulloblastoma-spine, medulloblastoma-whole brain, lung and head and neck). Voxel-by-voxel comparisons were performed on two-dimensional distal dose surfaces calculated by pencil-beam and MC algorithms to obtain the average range differences and root mean square deviation for each field for the distal position of the 90% dose level (R90) and the 50% dose level (R50). The average dose degradation of the distal falloff region, defined as the distance between the distal position of the 80% and 20% dose levels (R80-R20), was also analyzed. All ranges were calculated in water-equivalent distances. Considering total range uncertainties and uncertainties from dose calculation alone, we were able to deduce site-specific estimations. For liver, prostate and whole brain fields our results demonstrate that a reduction of currently used uncertainty margins is feasible even without introducing MC dose calculations. We recommend range margins of 2.8% + 1.2 mm for liver and prostate treatments and 3.1% + 1.2 mm for whole brain treatments, respectively. On the other hand, current margins seem to be insufficient for some breast, lung and head and neck patients, at least if used generically. If no case specific adjustments are applied, a generic margin of 6.3% + 1.2 mm would be needed for breast, lung and head and neck treatments. We conclude that the currently used generic range uncertainty margins in proton therapy should be redefined site specific and that complex geometries may require a field specific adjustment. Routine verifications of treatment plans using MC simulations are recommended for patients with heterogeneous geometries.

  9. In vivo measurements for high dose rate brachytherapy with optically stimulated luminescent dosimeters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, Renu; Jursinic, Paul A.

    2013-07-15

    Purpose: To show the feasibility of clinical implementation of OSLDs for high dose-rate (HDR) in vivo dosimetry for gynecological and breast patients; to discuss how the OSLDs were characterized for an Ir-192 source, taking into account the low gamma energy and high dose gradients; and to describe differences caused by the dose calculation formalism of the treatment planning systems. Methods: OSLD irradiations were made using the GammaMedplus iX Ir-192 HDR, Varian Medical Systems, Milpitas, CA. BrachyVision versions 8.9 and 10.0, Varian Medical Systems, Milpitas, CA, were used for calculations. Version 8.9 used the TG-43 algorithm and version 10.0 used the Acuros algorithm. The OSLDs (InLight Nanodots) were characterized for Ir-192. Various phantoms were created to assess calculated and measured doses and the angular dependence and self-absorption of the Nanodots. Following successful phantom measurements, measurements for gynecological and breast cancer patients were made and compared to calculated doses. Results: The OSLD sensitivity to Ir-192 compared to 6 MV is between 1.10 and 1.25, is unique to each detector, and changes with accumulated dose. The measured doses were compared to those predicted by the treatment planning system and found to be in agreement for the gynecological patients to within measurement uncertainty. The range of differences between the measured and Acuros-calculated doses was -10% to +14%. For the breast patients, there was a discrepancy of -4.4% to +6.5% between the measured and calculated doses at the skin surface when the Acuros algorithm was used. These differences were within experimental uncertainty due to (random) error in the location of the detector with respect to the treatment catheter. Conclusions: OSLDs can be successfully used for HDR in vivo dosimetry. However, for the measurements to be meaningful, one must account for the angular dependence, volume averaging, and the greater sensitivity to Ir-192 gamma rays than to 6 MV x-rays if 6 MV x-rays were used for OSLD calibration. The limitations of the treatment planning algorithm must be understood, especially for surface dose measurements. Use of in vivo dosimetry for HDR brachytherapy treatments is feasible and has the potential to detect and prevent gross errors. In vivo dosimetry should be included as part of the QA for an HDR brachytherapy program.

  10. Theory of rumour spreading in complex social networks

    NASA Astrophysics Data System (ADS)

    Nekovee, M.; Moreno, Y.; Bianconi, G.; Marsili, M.

    2007-01-01

    We introduce a general stochastic model for the spread of rumours, and derive mean-field equations that describe the dynamics of the model on complex social networks (in particular, those mediated by the Internet). We use analytical and numerical solutions of these equations to examine the threshold behaviour and dynamics of the model on several models of such networks: random graphs, uncorrelated scale-free networks and scale-free networks with assortative degree correlations. We show that in both homogeneous networks and random graphs the model exhibits a critical threshold in the rumour spreading rate below which a rumour cannot propagate in the system. In the case of scale-free networks, on the other hand, this threshold becomes vanishingly small in the limit of infinite system size. We find that the initial rate at which a rumour spreads is much higher in scale-free networks than in random graphs, and that the rate at which the spreading proceeds on scale-free networks is further increased when assortative degree correlations are introduced. The impact of degree correlations on the final fraction of nodes that ever hears a rumour, however, depends on the interplay between network topology and the rumour spreading rate. Our results show that scale-free social networks are prone to the spreading of rumours, just as they are to the spreading of infections. They are relevant to the spreading dynamics of chain emails, viral advertising and large-scale information dissemination algorithms on the Internet.
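    For readers who want to experiment, the toy Monte Carlo below simulates a Maki-Thompson-style ignorant/spreader/stifler rumour process on an Erdos-Renyi random graph, the simplest of the network classes studied above. The contact rule, rates, and graph parameters are illustrative assumptions, not the paper's mean-field equations.

```python
import random

def erdos_renyi(n: int, p: float, rng: random.Random):
    """Undirected G(n, p) random graph as adjacency lists."""
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def rumour_final_fraction(n=1000, p=0.01, lam=0.5, alpha=1.0, seed=1):
    """Fraction of nodes that ever hear the rumour (spreaders + stiflers)."""
    rng = random.Random(seed)
    adj = erdos_renyi(n, p, rng)
    state = ["I"] * n              # I: ignorant, S: spreader, R: stifler
    state[0] = "S"
    spreaders = {0}
    while spreaders:
        for s in list(spreaders):
            if not adj[s]:                    # isolated spreader gives up
                state[s] = "R"
                spreaders.discard(s)
                continue
            nb = rng.choice(adj[s])           # contact one random neighbour
            if state[nb] == "I" and rng.random() < lam:
                state[nb] = "S"               # neighbour accepts the rumour
                spreaders.add(nb)
            elif state[nb] in ("S", "R") and rng.random() < alpha:
                state[s] = "R"                # spreader turns stifler
                spreaders.discard(s)
    return sum(st != "I" for st in state) / n

print(rumour_final_fraction())
```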

  11. Optimizing spread dynamics on graphs by message passing

    NASA Astrophysics Data System (ADS)

    Altarelli, F.; Braunstein, A.; Dall'Asta, L.; Zecchina, R.

    2013-09-01

    Cascade processes are responsible for many important phenomena in natural and social sciences. Simple models of irreversible dynamics on graphs, in which nodes activate depending on the state of their neighbors, have been successfully applied to describe cascades in a large variety of contexts. Over the past decades, much effort has been devoted to understanding the typical behavior of the cascades arising from initial conditions extracted at random from some given ensemble. However, the problem of optimizing the trajectory of the system, i.e. of identifying appropriate initial conditions to maximize (or minimize) the final number of active nodes, is still considered to be practically intractable, with the only exception being models that satisfy a sort of diminishing returns property called submodularity. Submodular models can be approximately solved by means of greedy strategies, but by definition they lack cooperative characteristics which are fundamental in many real systems. Here we introduce an efficient algorithm based on statistical physics for the optimization of trajectories in cascade processes on graphs. We show that for a wide class of irreversible dynamics, even in the absence of submodularity, the spread optimization problem can be solved efficiently on large networks. Analytic and algorithmic results on random graphs are complemented by the solution of the spread maximization problem on a real-world network (the Epinions consumer reviews network).
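    The greedy strategy that the abstract contrasts with its message-passing approach can be stated in a few lines: repeatedly add the seed whose inclusion most increases the estimated final spread. The sketch below assumes a user-supplied Monte Carlo estimator spread_mc for whichever cascade dynamic is being optimized; it is the generic submodular baseline, not the authors' algorithm.

```python
def greedy_seeds(nodes, k, spread_mc):
    """Pick k seeds; spread_mc(seed_set) -> estimated final active count."""
    seeds = set()
    for _ in range(k):
        best = max((v for v in nodes if v not in seeds),
                   key=lambda v: spread_mc(seeds | {v}))
        seeds.add(best)
    return seeds

# Trivial stand-in estimator (seed-set size) just to show the call shape.
print(greedy_seeds(range(5), 2, len))
```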

  12. #FluxFlow: Visual Analysis of Anomalous Information Spreading on Social Media.

    PubMed

    Zhao, Jian; Cao, Nan; Wen, Zhen; Song, Yale; Lin, Yu-Ru; Collins, Christopher

    2014-12-01

    We present FluxFlow, an interactive visual analysis system for revealing and analyzing anomalous information spreading in social media. Every day, millions of messages are created, commented on, and shared by people on social media websites such as Twitter and Facebook. This provides valuable data for researchers and practitioners in many application domains, such as marketing, to inform decision-making. Distilling valuable social signals from the crowd's messages, however, is challenging due to heterogeneous and dynamic crowd behaviors. The challenge is rooted in data analysts' ability to discern anomalous information behaviors, such as the spreading of rumors or misinformation, from more conventional patterns, such as popular topics and newsworthy events, in a timely fashion. FluxFlow incorporates advanced machine learning algorithms to detect anomalies, and offers a set of novel visualization designs for presenting the detected threads for deeper analysis. We evaluated FluxFlow with real datasets containing the Twitter feeds captured during significant events such as Hurricane Sandy. Through quantitative measurements of the algorithmic performance and qualitative interviews with domain experts, the results show that the back-end anomaly detection model is effective in identifying anomalous retweeting threads, and its front-end interactive visualizations are intuitive and useful for analysts to discover insights in data and comprehend the underlying analytical model.

  13. SU-F-J-124: Reduction in Dosimetric Impact of Motion Using VMAT Compared to IMRT in Hypofractionated Prostate Cancer Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ravindranath, B; Xiong, J; Happersett, L

    2016-06-15

    Purpose: To quantify and compare the dosimetric impact of motion management correction strategies during VMAT and IMRT for hypofractionated prostate treatment. Methods: Two-arc VMAT and 9-field IMRT plans were generated for two prostate cancer patients undergoing hypofractionated radiotherapy (7.5 Gy × 5 and 8 Gy × 5). 212 motion traces were retrospectively extracted from treatment records of prostate cancer patients with implanted Calypso beacons. Dose to the CTV and normal tissues was reconstructed for each trace and plan, taking into account the actual treatment delivery time. The following motion-correction scenarios were simulated: (1) VMAT plan: (a) no correction, (b) correction between arcs, (c) correction every 20 degrees of gantry rotation; and (2) IMRT plan: (a) no correction, (b) correction between fields. A 2 mm action threshold for position correction was assumed. The 5-95% confidence interval (CI) range was extracted from the family of DVHs for each correction scenario. Results: Treatment duration (VMAT vs IMRT) was 3 vs 12 min for the 8 Gy plan and 3 vs 9 min for the 7.5 Gy plan. In the absence of correction, the VMAT 5-95% CI dose spread was, on average, less than the IMRT dose spread by 2% for CTV D95, 9% for rectal wall (RW) D1cc, and 9% for bladder wall (BW) D53. Further, the VMAT between-arcs correction strategy reduced the spread about the planned value compared to the IMRT between-fields correction by 1% for CTV D95, 2.6% for RW D1cc, and 2% for BW D53. The VMAT 20-degree strategy led to a greater reduction in dose spread compared to IMRT: 2% for CTV D95, 4.5% for RW D1cc, and 6.7% for BW D53. Conclusion: In the absence of a correction strategy, the limited motion during VMAT's shorter delivery times translates into less motion-induced dosimetric degradation than IMRT. Performing limited periodic motion correction during VMAT can yield excellent conformity to planned values that is superior to IMRT. This work was partially supported by Varian Medical Systems.

  14. Quantitative assessment of the accuracy of dose calculation using pencil beam and Monte Carlo algorithms and requirements for clinical quality assurance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ali, Imad, E-mail: iali@ouhsc.edu; Ahmad, Salahuddin

    2013-10-01

    To compare the doses calculated using the BrainLAB pencil beam (PB) and Monte Carlo (MC) algorithms for tumors located in various sites including the lung, and to evaluate the quality assurance procedures required for verification of the accuracy of dose calculation. The dose-calculation accuracy of PB and MC was also assessed quantitatively with measurements using an ionization chamber and Gafchromic films placed in solid-water and heterogeneous phantoms. The dose was calculated using the PB convolution and MC algorithms in the iPlan treatment planning system from BrainLAB. The dose calculation was performed on the patients' computed tomography images with lesions in various treatment sites including 5 lungs, 5 prostates, 4 brains, 2 head and necks, and 2 paraspinal tissues. A combination of conventional, conformal, and intensity-modulated radiation therapy plans was used in dose calculation. The leaf sequences from intensity-modulated radiation therapy plans or beam shapes from conformal plans, monitor units, and other planning parameters calculated by PB were identical for calculating dose with MC. Heterogeneity correction was considered in both PB and MC dose calculations. Dose-volume parameters such as V95 (volume covered by 95% of the prescription dose), dose distributions, and gamma analysis were used to evaluate the doses calculated by PB and MC. The doses measured by ionization chamber and EBT Gafchromic film in solid-water and heterogeneous phantoms were used to quantitatively assess the accuracy of the doses calculated by PB and MC. The dose-volume histograms and dose distributions calculated by PB and MC in the brain, prostate, paraspinal, and head and neck cases were in good agreement with one another (within 5%) and provided acceptable planning target volume coverage. However, dose distributions of the patients with lung cancer showed large discrepancies. For a plan optimized with PB, the dose coverage appeared clinically acceptable, whereas in reality MC showed a systematic lack of dose coverage: the dose calculated by PB for lung tumors was overestimated by up to 40%. Interestingly, despite large discrepancies in dose-volume histogram coverage of the planning target volume between PB and MC, the point doses at the isocenter (center of the lesions) calculated by both algorithms agreed within 7%, even for lung cases. The dose distributions measured with EBT Gafchromic films in heterogeneous phantoms were nearly 15% lower than PB predictions at interfaces between heterogeneous media, and these lower measured doses were in agreement with MC. The doses (V95) calculated by MC and PB agreed within 5% for treatment sites with small tissue heterogeneities such as the prostate, brain, head and neck, and paraspinal tumors. Considerable discrepancies, up to 40%, were observed in the dose-volume coverage between MC and PB for lung tumors, which may affect clinical outcomes. The discrepancies between MC and PB increased at 15 MV compared with 6 MV, underscoring the importance of implementing accurate clinical treatment planning algorithms such as MC. Comparison of point doses is not representative of the discrepancies in dose coverage and might be misleading in evaluating the accuracy of dose calculation between PB and MC. Thus, the clinical quality assurance procedures required to verify the accuracy of dose calculation using PB and MC should consider measurements of 2- and 3-dimensional dose distributions rather than a single point measurement, using heterogeneous phantoms instead of homogeneous water-equivalent phantoms.
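    Several of the comparisons above rely on the gamma index. As a reference for the metric itself, the sketch below computes a 1-D global 3%/3 mm gamma; production QA tools extend the same search to 2-D/3-D dose grids. This is a generic illustration, not the study's analysis code.

```python
import numpy as np

def gamma_1d(dose_ref, dose_eval, positions_mm, dose_tol=0.03, dist_tol_mm=3.0):
    """Per-point gamma of dose_eval against dose_ref on a common 1-D grid."""
    norm = dose_ref.max()                       # global dose normalization
    gammas = np.empty_like(dose_ref)
    for i, (x_r, d_r) in enumerate(zip(positions_mm, dose_ref)):
        dd = (dose_eval - d_r) / (dose_tol * norm)   # dose-difference term
        dx = (positions_mm - x_r) / dist_tol_mm      # distance-to-agreement term
        gammas[i] = np.sqrt(dx**2 + dd**2).min()
    return gammas

# Demo: Gaussian profile vs the same profile shifted by 1 mm.
x = np.linspace(0.0, 50.0, 101)
ref = np.exp(-((x - 25.0) / 10.0) ** 2)
ev = np.exp(-((x - 26.0) / 10.0) ** 2)
print((gamma_1d(ref, ev, x) <= 1).mean())   # pass rate (fraction of points)
```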

  15. Beyond Gaussians: a study of single spot modeling for scanning proton dose calculation

    PubMed Central

    Li, Yupeng; Zhu, Ronald X.; Sahoo, Narayan; Anand, Aman; Zhang, Xiaodong

    2013-01-01

    Active spot scanning proton therapy is becoming increasingly adopted by proton therapy centers worldwide. Unlike passive-scattering proton therapy, active spot scanning proton therapy, especially intensity-modulated proton therapy, requires proper modeling of each scanning spot to ensure accurate computation of the total dose distribution contributed from a large number of spots. During commissioning of the spot scanning gantry at the Proton Therapy Center in Houston, it was observed that the long-range scattering protons in a medium may have been inadequately modeled for high-energy beams by a commercial treatment planning system, which could lead to incorrect prediction of field-size effects on dose output. In the present study, we developed a pencil-beam algorithm for scanning-proton dose calculation by focusing on properly modeling individual scanning spots. All modeling parameters required by the pencil-beam algorithm can be generated based solely on a few sets of measured data. We demonstrated that low-dose halos in single-spot profiles in the medium could be adequately modeled with the addition of a modified Cauchy-Lorentz distribution function to a double-Gaussian function. The field-size effects were accurately computed at all depths and field sizes for all energies, and good dose accuracy was also achieved for patient dose verification. The implementation of the proposed pencil beam algorithm also enabled us to study the importance of different modeling components and parameters at various beam energies. The results of this study may be helpful in improving dose calculation accuracy and simplifying beam commissioning and treatment planning processes for spot scanning proton therapy. PMID:22297324
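    A schematic version of the single-spot lateral model described above: a double Gaussian for the core plus a heavier-tailed Cauchy-Lorentz term for the low-dose halo. The exact "modified" Cauchy-Lorentz form and fitted parameters from the paper are not reproduced; the weights and widths here are free parameters one would fit to measured spot profiles.

```python
import math

def spot_lateral_profile(r, w1, s1, w2, s2, w3, g):
    """Planar dose/fluence at radial distance r from the spot axis.

    w1, w2, w3: component weights (summing to 1 gives a normalized profile);
    s1, s2: Gaussian sigmas; g: Cauchy-Lorentz halo scale (same units as r).
    """
    gauss1 = w1 * math.exp(-r**2 / (2 * s1**2)) / (2 * math.pi * s1**2)
    gauss2 = w2 * math.exp(-r**2 / (2 * s2**2)) / (2 * math.pi * s2**2)
    # bivariate Cauchy-Lorentz tail: carries the long-range, low-dose halo
    halo = w3 * g / (2 * math.pi * (r**2 + g**2) ** 1.5)
    return gauss1 + gauss2 + halo

# Example: 70/25/5 mix, 3 mm and 6 mm cores, 20 mm halo scale, r = 10 mm.
print(spot_lateral_profile(10.0, 0.70, 3.0, 0.25, 6.0, 0.05, 20.0))
```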

  16. An algorithm for treatment of patients with hypersensitivity reactions after vaccines.

    PubMed

    Wood, Robert A; Berger, Melvin; Dreskin, Stephen C; Setse, Rosanna; Engler, Renata J M; Dekker, Cornelia L; Halsey, Neal A

    2008-09-01

    Concerns about possible allergic reactions to immunizations are raised frequently by both patients/parents and primary care providers. Estimates of true allergic, or immediate hypersensitivity, reactions to routine vaccines range from 1 per 50000 doses for diphtheria-tetanus-pertussis to approximately 1 per 500000 to 1000000 doses for most other vaccines. In a large study from New Zealand, data were collected during a 5-year period on 15 marketed vaccines and revealed an estimated rate of 1 immediate hypersensitivity reaction per 450000 doses of vaccine administered. Another large study, conducted within the Vaccine Safety Datalink, described a range of reaction rates to >7.5 million doses. Depending on the study design and the time after the immunization event, reaction rates varied from 0.65 cases per million doses to 1.53 cases per million doses when additional allergy codes were included. For some vaccines, particularly when allergens such as gelatin are part of the formulation (eg, Japanese encephalitis), higher rates of serious allergic reactions may occur. Although these per-dose estimates suggest that true hypersensitivity reactions are quite rare, the large number of doses that are administered, especially for the commonly used vaccines, makes this a relatively common clinical problem. In this review, we present background information on vaccine hypersensitivity, followed by a detailed algorithm that provides a rational and organized approach for the evaluation and treatment of patients with suspected hypersensitivity. We then include 3 cases of suspected allergic reactions to vaccines that have been referred to the Clinical Immunization Safety Assessment network to demonstrate the practical application of the algorithm.

  17. SU-E-T-800: Verification of Acuros XB Dose Calculation Algorithm at Air Cavity-Tissue Interface Using Film Measurement for Small Fields of 6-MV Flattening Filter-Free Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kang, S; Suh, T; Chung, J

    2015-06-15

    Purpose: To verify the dose accuracy of the Acuros XB (AXB) dose calculation algorithm at the air-tissue interface using an inhomogeneous phantom for 6-MV flattening filter-free (FFF) beams. Methods: An inhomogeneous phantom containing an air cavity was manufactured for verifying dose accuracy at the air-tissue interface. The phantom included air cavities of 1 and 3 cm thickness. To evaluate the central axis doses (CAD) and dose profiles at the interface, dose calculations were performed for 3 × 3 and 4 × 4 cm2 fields of 6 MV FFF beams with AAA and AXB in the Eclipse treatment planning system. Measurements in this region were performed with Gafchromic film. Root-mean-square errors (RMSE) were analyzed for the calculated and measured dose profiles. Dose profiles were divided into an inner-profile (>80%) region and a penumbra (20% to 80%) region for evaluating RMSE. To quantify the difference between distributions, gamma evaluation was used with a 3%/3 mm agreement criterion. Results: For the percentage differences (%Diffs) between measured and calculated CAD at the interface, AXB showed better agreement than AAA. The %Diffs increased with increasing air cavity thickness, similarly for both algorithms. In the RMSEs of the inner profile, AXB was more accurate than AAA; the difference was up to a factor of 6, due to overestimation by AAA. RMSEs in the penumbra showed larger differences with increasing measurement depth. Gamma evaluation likewise showed decreased passing rates in the penumbra. Conclusion: This study demonstrated that dose calculation with AXB is more accurate than with AAA at the air-tissue interface. The 2D dose distributions with AXB, for both the inner profile and the penumbra, showed better agreement than those with AAA across the measurement depths and air cavity sizes considered.

  18. Characterisation of a MOSFET-based detector for dose measurement under megavoltage electron beam radiotherapy

    NASA Astrophysics Data System (ADS)

    Jong, W. L.; Ung, N. M.; Tiong, A. H. L.; Rosenfeld, A. B.; Wong, J. H. D.

    2018-03-01

    The aim of this study is to investigate the fundamental dosimetric characteristics of the MOSkin detector for megavoltage electron beam dosimetry. The reproducibility, linearity, energy dependence, dose rate dependence, depth dose measurement, output factor measurement, and surface dose measurement under megavoltage electron beams were tested. The MOSkin detector showed excellent reproducibility (>98%) and linearity (R2 = 1.00) up to 2000 cGy for 4-20 MeV electron beams. The MOSkin detector also showed minimal dose rate dependence (within ±3%) and energy dependence (within ±2%) over the clinical range of electron beams, except for an energy dependence at 4 MeV. An energy dependence correction factor of 1.075 is needed when the MOSkin detector is used for the 4 MeV electron beam. The output factors measured by the MOSkin detector were within ±2% of those measured with the EBT3 film and CC13 chamber. The depth doses measured using the MOSkin detector agreed with those measured using the CC13 chamber, except in the build-up region, owing to the dose volume averaging effect of the CC13 chamber. For surface dose measurements, MOSkin measurements were in agreement within ±3% with those measured using EBT3 film. Measurements using the MOSkin detector were also compared to electron dose calculation algorithms, namely the GGPB and eMC algorithms. Both algorithms were in agreement with measurements to within ±2% and ±4% for output factor (except for the 4 × 4 cm2 field size) and surface dose, respectively. With the uncertainties taken into account, the MOSkin detector was found to be a suitable detector for dose measurement under megavoltage electron beams. This was demonstrated by in vivo skin dose measurements on patients during electron boost to the breast tumour bed.

  19. Intensity-modulated radiotherapy for locally advanced non-small-cell lung cancer: a dose-escalation planning study.

    PubMed

    Lievens, Yolande; Nulens, An; Gaber, Mousa Amr; Defraene, Gilles; De Wever, Walter; Stroobants, Sigrid; Van den Heuvel, Frank

    2011-05-01

    To evaluate the potential for dose escalation with intensity-modulated radiotherapy (IMRT) in positron emission tomography-based radiotherapy planning for locally advanced non-small-cell lung cancer (LA-NSCLC). For 35 LA-NSCLC patients, three-dimensional conformal radiotherapy and IMRT plans were made to a prescription dose (PD) of 66 Gy in 2-Gy fractions. Dose escalation was performed toward the maximal PD using secondary endpoint constraints for the lung, spinal cord, and heart, with de-escalation according to defined esophageal tolerance. Dose calculation was performed using the Eclipse pencil beam algorithm, and all plans were recalculated using a collapsed cone algorithm. The normal tissue complication probabilities (NTCPs) were calculated for the lung (Grade 2 pneumonitis) and esophagus (acute toxicity, Grade 2 or greater, and late toxicity). IMRT resulted in statistically significant decreases in the mean lung (p < .0001) and maximal spinal cord (p = .002 and .0005) doses, allowing an average increase in the PD of 8.6-14.2 Gy (p ≤ .0001). This advantage was lost after de-escalation within the defined esophageal dose limits. The lung NTCPs were significantly lower for IMRT (p < .0001), even after dose escalation. For esophageal toxicity, IMRT significantly decreased the acute NTCP values at the low dose levels (p = .0009 and p < .0001). After maximal dose escalation, late esophageal tolerance became critical (p < .0001), especially when using IMRT, owing to the parallel increases in the esophageal dose and PD. In LA-NSCLC, IMRT offers the potential to significantly escalate the PD, dependent on the lung and spinal cord tolerance. However, parallel increases in the esophageal dose abolished the advantage, even when using collapsed cone algorithms. This is important to consider in the context of concomitant chemoradiotherapy schedules using IMRT. Copyright © 2011 Elsevier Inc. All rights reserved.

  20. [New calculation algorithms in brachytherapy for iridium 192 treatments].

    PubMed

    Robert, C; Dumas, I; Martinetti, F; Chargari, C; Haie-Meder, C; Lefkopoulos, D

    2018-05-18

    Since 1995, brachytherapy dosimetry protocols have followed the methodology recommended by Task Group 43. This methodology, which has the advantage of being fast, is based on several approximations that are not always valid in clinical conditions. Model-based dose calculation algorithms have recently emerged in treatment planning systems and are considered a major evolution, as they allow for consideration of the patient's finite dimensions, tissue heterogeneities, and the presence of high-atomic-number materials in applicators. In 2012, a report from the American Association of Physicists in Medicine Radiation Therapy Task Group 186 reviewed these models and made recommendations for their clinical implementation. This review focuses on the use of model-based dose calculation algorithms in the context of iridium-192 treatments. After a description of these algorithms and their clinical implementation, a summary of the main questions raised by these new methods is provided. Considerations regarding the choice of the medium used for dose specification and the recommended methodology for assigning material characteristics are described in particular. In the last part, recent concrete examples from the literature illustrate the capabilities of these new algorithms on clinical cases. Copyright © 2018 Société française de radiothérapie oncologique (SFRO). Published by Elsevier SAS. All rights reserved.

  1. Estimation of parameters of dose volume models and their confidence limits

    NASA Astrophysics Data System (ADS)

    van Luijk, P.; Delvigne, T. C.; Schilstra, C.; Schippers, J. M.

    2003-07-01

    Predictions of the normal-tissue complication probability (NTCP) for the ranking of treatment plans are based on fits of dose-volume models to clinical and/or experimental data. In the literature, several different fit methods are used. In this work, frequently used methods and techniques for fitting NTCP models to dose-response data to establish dose-volume effects are discussed. The techniques are tested for their usability with dose-volume data and NTCP models. Different methods to estimate the confidence intervals of the model parameters are part of this study. From a critical-volume (CV) model with biologically realistic parameters, a primary dataset was generated, serving as the reference for this study and describable by the NTCP model. The CV model was fitted to this dataset. From the resulting parameters and the CV model, 1000 secondary datasets were generated by Monte Carlo simulation. All secondary datasets were fitted to obtain 1000 parameter sets of the CV model. Thus the 'real' spread in fit results due to statistical variation in the data was obtained and compared with estimates of the confidence intervals obtained by different methods applied to the primary dataset. The confidence limits of the parameters of one dataset were estimated using methods employing the covariance matrix, the jackknife method, and direct inspection of the likelihood landscape. These results were compared with the spread of the parameters obtained from the secondary parameter sets. For the estimation of confidence intervals on NTCP predictions, three methods were tested. Firstly, propagation of errors using the covariance matrix was used. Secondly, the width of a bundle of curves resulting from parameter sets within the one-standard-deviation region of the likelihood space was investigated. Thirdly, many parameter sets and their likelihoods were used to create a likelihood-weighted probability distribution of the NTCP. It is concluded that for the type of dose-response data used here, only a full likelihood analysis will produce reliable results. The often-used approximations, such as use of the covariance matrix, produce inconsistent confidence limits on both the parameter sets and the resulting NTCP values.
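    The secondary-dataset procedure described above is, in essence, a parametric bootstrap. A compact sketch follows, with the NTCP-model fit and the data simulator left as user-supplied callables; both are placeholders, not the paper's CV-model code.

```python
import numpy as np

def bootstrap_ci(fit, simulate, primary_data, n_boot=1000, level=0.68, seed=0):
    """fit(data) -> parameter vector; simulate(params, rng) -> synthetic data.

    Returns the primary fit plus percentile confidence limits taken from the
    spread of refits to Monte Carlo secondary datasets.
    """
    rng = np.random.default_rng(seed)
    p_hat = fit(primary_data)
    boots = np.array([fit(simulate(p_hat, rng)) for _ in range(n_boot)])
    lo, hi = np.percentile(boots, [50 * (1 - level), 50 * (1 + level)], axis=0)
    return p_hat, lo, hi

# Tiny demo: CI for the mean of normal data (stands in for the NTCP fit).
fit = lambda data: np.array([data.mean()])
simulate = lambda p, rng: p[0] + rng.standard_normal(50)
print(bootstrap_ci(fit, simulate, np.random.default_rng(1).standard_normal(50)))
```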

  2. The Texas Medication Algorithm Project (TMAP) schizophrenia algorithms.

    PubMed

    Miller, A L; Chiles, J A; Chiles, J K; Crismon, M L; Rush, A J; Shon, S P

    1999-10-01

    In the Texas Medication Algorithm Project (TMAP), detailed guidelines for medication management of schizophrenia and related disorders, bipolar disorders, and major depressive disorders have been developed and implemented. This article describes the algorithms developed for medication treatment of schizophrenia and related disorders. The guidelines recommend a sequence of medications and discuss dosing, duration, and switch-over tactics. They also specify response criteria at each stage of the algorithm for both positive and negative symptoms. The rationale and evidence for each aspect of the algorithms are presented.

  3. Low-dose CT reconstruction via L1 dictionary learning regularization using iteratively reweighted least-squares.

    PubMed

    Zhang, Cheng; Zhang, Tao; Li, Ming; Peng, Chengtao; Liu, Zhaobang; Zheng, Jian

    2016-06-18

    In order to reduce the radiation dose of computed tomography (CT), compressed sensing theory has been a hot topic, since it provides the possibility of high-quality recovery from sparsely sampled data. Recently, an algorithm based on dictionary learning (DL) was developed to deal with the sparse CT reconstruction problem. However, the existing DL algorithm focuses on the minimization problem with an L2-norm regularization term, which causes reconstruction quality to deteriorate as the sampling rate declines further. It is therefore essential to improve the DL method to meet the demand for further dose reduction. In this paper, we replaced the L2-norm regularization term with an L1-norm one. It is expected that the proposed L1-DL method could alleviate the over-smoothing effect of the L2 minimization and preserve more image detail. The proposed algorithm solves the L1-minimization problem by a weighting strategy, solving the resulting weighted L2-minimization problem with iteratively reweighted least squares (IRLS). Through numerical simulation, the proposed algorithm is compared with the existing DL method (adaptive dictionary based statistical iterative reconstruction, ADSIR) and two other typical compressed sensing algorithms. The results reveal that the proposed algorithm is more accurate than the other algorithms, especially when the sampling rate is reduced further or the noise is increased. The proposed L1-DL algorithm can utilize more prior information on image sparsity than ADSIR. By transforming the L2-norm regularization term of ADSIR into an L1-norm one and solving the L1-minimization problem by the IRLS strategy, L1-DL reconstructs the image more exactly.
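    The weighting strategy mentioned above is the standard IRLS trick: the L1 penalty ||x||_1 is approximated by a sequence of weighted L2 penalties with weights w_i = 1/(|x_i| + eps), each of which reduces to a linear system. The sketch below shows the generic recipe on a small synthetic problem; it is not the paper's dictionary-learning pipeline.

```python
import numpy as np

def irls_l1(A, b, lam=0.1, n_iter=50, eps=1e-6):
    """Approximately minimize ||Ax - b||^2 + lam * ||x||_1 via IRLS."""
    m, n = A.shape
    x = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    for _ in range(n_iter):
        w = 1.0 / (np.abs(x) + eps)             # reweighting from current x
        # normal equations of ||Ax - b||^2 + lam * sum_i w_i * x_i^2
        x = np.linalg.solve(AtA + lam * np.diag(w), Atb)
    return x

# Demo: recover a 3-sparse vector from 40 noisy random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[3, 30, 77]] = [1.0, -2.0, 0.5]
b = A @ x_true + 0.01 * rng.standard_normal(40)
print(np.round(irls_l1(A, b, lam=0.05), 2)[[3, 30, 77]])
```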

  4. Inverse determination of the penalty parameter in penalized weighted least-squares algorithm for noise reduction of low-dose CBCT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Jing; Guan, Huaiqun; Solberg, Timothy

    2011-07-15

    Purpose: A statistical projection restoration algorithm based on the penalized weighted least-squares (PWLS) criterion can substantially improve the image quality of low-dose CBCT images. The performance of PWLS is largely dependent on the choice of the penalty parameter. Previously, the penalty parameter was chosen empirically by trial and error. In this work, the authors developed an inverse technique to calculate the penalty parameter in PWLS for noise suppression of low-dose CBCT in image guided radiotherapy (IGRT). Methods: In IGRT, a daily CBCT is acquired for the same patient during a treatment course. In this work, the authors acquired the CBCT with a high-mAs protocol for the first session and then a lower-mAs protocol for the subsequent sessions. The high-mAs projections served as the goal (ideal) toward which the low-mAs projections were to be smoothed by minimizing the PWLS objective function. The penalty parameter was determined through an inverse calculation of the derivative of the objective function incorporating both the high- and low-mAs projections. The parameter obtained can then be used in PWLS to smooth the noise in low-dose projections. CBCT projections for a CatPhan 600 and an anthropomorphic head phantom, as well as for a brain patient, were used to evaluate the performance of the proposed technique. Results: The penalty parameter in PWLS was obtained for each CBCT projection using the proposed strategy. The noise in the low-dose CBCT images reconstructed from the smoothed projections was greatly suppressed. Image quality in PWLS-processed low-dose CBCT was comparable to that of the corresponding high-dose CBCT. Conclusions: A technique was proposed to estimate the penalty parameter for the PWLS algorithm. It provides an objective and efficient way to obtain the penalty parameter for image restoration algorithms that require predefined smoothing parameters.

  5. Caffeine dosing strategies to optimize alertness during sleep loss.

    PubMed

    Vital-Lopez, Francisco G; Ramakrishnan, Sridhar; Doty, Tracy J; Balkin, Thomas J; Reifman, Jaques

    2018-05-28

    Sleep loss, which affects about one-third of the US population, can severely impair physical and neurobehavioural performance. Although caffeine, the most widely used stimulant in the world, can mitigate these effects, currently there are no tools to guide the timing and amount of caffeine consumption to optimize its benefits. In this work, we provide an optimization algorithm, suited for mobile computing platforms, to determine when and how much caffeine to consume, so as to safely maximize neurobehavioural performance at the desired time of the day, under any sleep-loss condition. The algorithm is based on our previously validated Unified Model of Performance, which predicts the effect of caffeine consumption on a psychomotor vigilance task. We assessed the algorithm by comparing the caffeine-dosing strategies (timing and amount) it identified with the dosing strategies used in four experimental studies, involving total and partial sleep loss. Through computer simulations, we showed that the algorithm yielded caffeine-dosing strategies that enhanced performance of the predicted psychomotor vigilance task by up to 64% while using the same total amount of caffeine as in the original studies. In addition, the algorithm identified strategies that resulted in equivalent performance to that in the experimental studies while reducing caffeine consumption by up to 65%. Our work provides the first quantitative caffeine optimization tool for designing effective strategies to maximize neurobehavioural performance and to avoid excessive caffeine consumption during any arbitrary sleep-loss condition. © 2018 The Authors. Journal of Sleep Research published by John Wiley & Sons Ltd on behalf of European Sleep Research Society.

  6. Continuous intensity map optimization (CIMO): A novel approach to leaf sequencing in step and shoot IMRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao Daliang; Earl, Matthew A.; Luan, Shuang

    2006-04-15

    A new leaf-sequencing approach has been developed that is designed to reduce the number of required beam segments for step-and-shoot intensity modulated radiation therapy (IMRT). This approach to leaf sequencing is called continuous-intensity-map optimization (CIMO). Using a simulated annealing algorithm, CIMO seeks to minimize differences between the optimized and sequenced intensity maps. Two distinguishing features of the CIMO algorithm are that (1) CIMO does not require each optimized intensity map to be clustered into discrete levels and (2) CIMO is not rule-based but rather simultaneously optimizes both the aperture shapes and weights. To test the CIMO algorithm, ten IMRT patient cases were selected (four head-and-neck, two pancreas, two prostate, one brain, and one pelvis). For each case, the optimized intensity maps were extracted from the Pinnacle3 treatment planning system. The CIMO algorithm was applied, and the optimized aperture shapes and weights were loaded back into Pinnacle. A final dose calculation was performed using Pinnacle's convolution/superposition-based dose calculation. On average, the CIMO algorithm provided a 54% reduction in the number of beam segments as compared with Pinnacle's leaf sequencer. The plans sequenced using the CIMO algorithm also provided improved target dose uniformity and a reduced discrepancy between the optimized and sequenced intensity maps. For ten clinical intensity maps, comparisons were performed between the CIMO algorithm and the power-of-two reduction algorithm of Xia and Verhey [Med. Phys. 25(8), 1424-1434 (1998)]. When the constraints of a Varian Millennium multileaf collimator were applied, the CIMO algorithm resulted in a 26% reduction in the number of segments. For an Elekta multileaf collimator, the CIMO algorithm resulted in a 67% reduction in the number of segments. An average leaf-sequencing time of less than one minute per beam was observed.
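    A generic simulated-annealing skeleton of the kind CIMO is built on, with the cost (e.g., the mismatch between optimized and sequenced intensity maps) and the move set (e.g., perturbing an aperture edge or weight) left as user-supplied callables. This is a schematic, not the published CIMO implementation.

```python
import math
import random

def anneal(state, cost, move, t0=1.0, cooling=0.999, steps=20000, seed=0):
    """Simulated annealing: always accept downhill moves, accept uphill
    moves with Boltzmann probability exp(-delta / t). move(state, rng)
    must return a perturbed copy of the state."""
    rng = random.Random(seed)
    energy = cost(state)
    best, best_energy = state, energy
    t = t0
    for _ in range(steps):
        candidate = move(state, rng)
        delta = cost(candidate) - energy
        if delta < 0 or rng.random() < math.exp(-delta / max(t, 1e-12)):
            state, energy = candidate, energy + delta
            if energy < best_energy:
                best, best_energy = state, energy
        t *= cooling                       # geometric cooling schedule
    return best, best_energy

# Demo: minimize (x - 3)^2 with Gaussian moves (stands in for map mismatch).
best, e = anneal(0.0, lambda x: (x - 3.0) ** 2,
                 lambda x, rng: x + rng.gauss(0.0, 0.5))
print(round(best, 2), round(e, 4))
```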

  7. SU-F-BRCD-09: Total Variation (TV) Based Fast Convergent Iterative CBCT Reconstruction with GPU Acceleration.

    PubMed

    Xu, Q; Yang, D; Tan, J; Anastasio, M

    2012-06-01

    To improve image quality and reduce imaging dose in CBCT for radiation therapy applications, and to realize near real-time image reconstruction based on a fast-convergence iterative algorithm accelerated by multiple GPUs. An iterative image reconstruction that minimizes a weighted least-squares cost function with total variation (TV) regularization was employed to mitigate projection data incompleteness and noise. To achieve rapid 3D image reconstruction (<1 min), a highly optimized multiple-GPU implementation of the algorithm was developed. The convergence rate and reconstruction accuracy were evaluated using a modified 3D Shepp-Logan digital phantom and a Catphan-600 physical phantom. The reconstructed images were compared with clinical FDK reconstruction results. Digital phantom studies showed that only 15 iterations and 60 iterations are needed to achieve algorithm convergence for the 360-view and 60-view cases, respectively; the RMSE was reduced to 10^-4 and 10^-2, respectively, using 15 iterations for each case. The algorithm required 5.4 s to complete one iteration for the 60-view case using one Tesla C2075 GPU. The few-view study indicated that the iterative algorithm has great potential to reduce imaging dose while preserving good image quality. For the physical Catphan studies, the images obtained from the iterative algorithm possessed better spatial resolution and higher SNRs than those obtained using a clinical FDK reconstruction algorithm. We have developed a fast-convergence iterative algorithm for CBCT image reconstruction. The developed algorithm yielded images with better spatial resolution and higher SNR than those produced by a commercial FDK tool. In addition, the few-view study showed that the iterative algorithm has great potential for significantly reducing imaging dose. We expect the developed reconstruction approach to facilitate applications including IGART and patient daily CBCT-based treatment localization. © 2012 American Association of Physicists in Medicine.

  8. Dosimetric validation of the Acuros XB Advanced Dose Calculation algorithm: fundamental characterization in water

    NASA Astrophysics Data System (ADS)

    Fogliata, Antonella; Nicolini, Giorgia; Clivio, Alessandro; Vanetti, Eugenio; Mancosu, Pietro; Cozzi, Luca

    2011-05-01

    This corrigendum intends to clarify some important points that were not clearly or properly addressed in the original paper, and for which the authors apologize. The original description of the first Acuros algorithm is from the developers, published in Physics in Medicine and Biology by Vassiliev et al (2010) in the paper entitled 'Validation of a new grid-based Boltzmann equation solver for dose calculation in radiotherapy with photon beams'. The main equations describing the algorithm reported in our paper, implemented as the 'Acuros XB Advanced Dose Calculation Algorithm' in the Varian Eclipse treatment planning system, were originally described (for the original Acuros algorithm) in the above mentioned paper by Vassiliev et al. The intention of our description in our paper was to give readers an overview of the algorithm, not pretending to have authorship of the algorithm itself (used as implemented in the planning system). Unfortunately our paper was not clear, particularly in not allocating full credit to the work published by Vassiliev et al on the original Acuros algorithm. Moreover, it is important to clarify that we have not adapted any existing algorithm, but have used the Acuros XB implementation in the Eclipse planning system from Varian. In particular, the original text of our paper should have been as follows: On page 1880 the sentence 'A prototype LBTE solver, called Attila (Wareing et al 2001), was also applied to external photon beam dose calculations (Gifford et al 2006, Vassiliev et al 2008, 2010). Acuros XB builds upon many of the methods in Attila, but represents a ground-up rewrite of the solver where the methods were adapted especially for external photon beam dose calculations' should be corrected to 'A prototype LBTE solver, called Attila (Wareing et al 2001), was also applied to external photon beam dose calculations (Gifford et al 2006, Vassiliev et al 2008). A new algorithm called Acuros, developed by the Transpire Inc. group, was built upon many of the methods in Attila, but represents a ground-up rewrite of the solver where the methods were especially adapted for external photon beam dose calculations, and described in Vassiliev et al (2010). Acuros XB is the Varian implementation of the original Acuros algorithm in the Eclipse planning system'. On page 1881, the sentence 'Monte Carlo and explicit LBTE solution, with sufficient refinement, will converge on the same solution. However, both methods produce errors (inaccuracies). In explicit LBTE solution methods, errors are primarily systematic, and result from discretization of the solution variables in space, angle, and energy. In both Monte Carlo and explicit LBTE solvers, a trade-off exists between speed and accuracy: reduced computational time may be achieved when less stringent accuracy criteria are specified, and vice versa' should cite the reference Vassiliev et al (2010). On page 1882, the beginning of the sub-paragraph The radiation transport model should start with 'The following description of the Acuros XB algorithm is as outlined by Vassiliev et al (2010) and reports the main steps of the radiation transport model as implemented in Eclipse'. The authors apologize for this lack of clarity in our published paper, and trust that this corrigendum gives full credit to Vassiliev et al in their earlier paper, with respect to previous work on the Acuros algorithm. 
However we wish to note that the entire contents of the data and results published in our paper are original and the work of the listed authors. References Gifford K A, Horton J L Jr, Wareing T A, Failla G and Mourtada F 2006 Comparison of a finite-element multigroup discrete-ordinates code with Monte Carlo for radiotherapy calculations Phys. Med. Biol. 51 2253-65 Vassiliev O N, Wareing T A, Davis I M, McGhee J, Barnett D, Horton J L, Gifford K, Failla G, Titt U and Mourtada F 2008 Feasibility of a multigroup deterministic solution method for three-dimensional radiotherapy dose calculations Int. J. Radiat. Oncol. Biol. Phys. 72 220-7 Vassiliev O N, Wareing T A, McGhee J, Failla G, Salehpour M R and Mourtada F 2010 Validation of a new grid based Boltzmann equation solver for dose calculation in radiotherapy with photon beams Phys. Med. Biol. 55 581-98 Wareing T A, McGhee J M, Morel J E and Pautz S D 2001 Discontinuous finite element Sn methods on three-dimensional unstructured grids Nucl. Sci. Eng. 138 256-68

  9. Performance Characteristics of an Independent Dose Verification Program for Helical Tomotherapy

    PubMed Central

    Chang, Isaac C. F.; Chen, Jeff; Yartsev, Slav

    2017-01-01

    Helical tomotherapy, with its advanced method of intensity-modulated radiation therapy delivery, has been used clinically for over 20 years. The standard delivery quality assurance procedure, which measures the accuracy of the delivered radiation dose from each treatment plan to a phantom, is time-consuming. RadCalc®, a radiotherapy dose verification software, has released a module for tomotherapy plan dose calculations, specifically for beta testing. RadCalc®'s accuracy for tomotherapy dose calculations was evaluated through examination of point doses in ten lung and ten prostate clinical plans. Doses calculated by the TomoHDA™ tomotherapy treatment planning system were used as the baseline. For lung cases, RadCalc® overestimated point doses in the lung by an average of 13%. Doses within the spinal cord and esophagus were overestimated by 10%. Prostate plans showed better agreement, with overestimations of 6% in the prostate, bladder, and rectum. The systematic overestimation likely resulted from limitations of the pencil beam dose calculation algorithm implemented by RadCalc®. The limitations were more severe in areas of greater inhomogeneity and less prominent in regions of homogeneity with densities closer to 1 g/cm3. Recommendations for RadCalc® dose calculation algorithms and anatomical representation were provided based on the results of the study. PMID:28974862

  10. Use of a channelized Hotelling observer to assess CT image quality and optimize dose reduction for iteratively reconstructed images.

    PubMed

    Favazza, Christopher P; Ferrero, Andrea; Yu, Lifeng; Leng, Shuai; McMillan, Kyle L; McCollough, Cynthia H

    2017-07-01

    The use of iterative reconstruction (IR) algorithms in CT generally decreases image noise and enables dose reduction. However, the amount of dose reduction possible using IR without sacrificing diagnostic performance is difficult to assess with conventional image quality metrics. Through this investigation, achievable dose reduction using a commercially available IR algorithm without loss of low contrast spatial resolution was determined with a channelized Hotelling observer (CHO) model and used to optimize a clinical abdomen/pelvis exam protocol. A phantom containing 21 low contrast disks-three different contrast levels and seven different diameters-was imaged at different dose levels. Images were created with filtered backprojection (FBP) and IR. The CHO was tasked with detecting the low contrast disks. CHO performance indicated dose could be reduced by 22% to 25% without compromising low contrast detectability (as compared to full-dose FBP images) whereas 50% or more dose reduction significantly reduced detection performance. Importantly, default settings for the scanner and protocol investigated reduced dose by upward of 75%. Subsequently, CHO-based protocol changes to the default protocol yielded images of higher quality and doses more consistent with values from a larger, dose-optimized scanner fleet. CHO assessment provided objective data to successfully optimize a clinical CT acquisition protocol.
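    For reference, the core of a channelized Hotelling observer is only a few lines once the channel matrix is chosen (e.g., Gabor or Laguerre-Gauss channels): reduce images to channel responses, form the Hotelling template from the pooled channel covariance, and report a detectability index. A generic numpy sketch, not the study's code:

```python
import numpy as np

def cho_detectability(signal_imgs, noise_imgs, U):
    """signal_imgs, noise_imgs: (n_images, n_pixels); U: (n_pixels, n_channels)."""
    vs = signal_imgs @ U                     # channel responses, signal present
    vn = noise_imgs @ U                      # channel responses, signal absent
    dv = vs.mean(axis=0) - vn.mean(axis=0)   # mean channel-space difference
    S = 0.5 * (np.cov(vs.T) + np.cov(vn.T))  # pooled channel covariance
    w = np.linalg.solve(S, dv)               # Hotelling template
    return float(dv @ w / np.sqrt(w @ S @ w))  # detectability index d'

# Demo on synthetic data with a stand-in random channel matrix.
rng = np.random.default_rng(0)
U = rng.standard_normal((256, 10))
noise = rng.standard_normal((200, 256))
signal = rng.standard_normal((200, 256)) + 0.3   # weak uniform "lesion"
print(cho_detectability(signal, noise, U))
```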

  11. Multi-limit unsymmetrical MLIBD image restoration algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Yang; Cheng, Yiping; Chen, Zai-wang; Bo, Chen

    2012-11-01

    A novel multi-limit unsymmetrical iterative blind deconvolution (MLIBD) algorithm was presented to enhance the performance of adaptive optics image restoration. The algorithm enhances the reliability of iterative blind deconvolution by introducing a bandwidth limit into the frequency domain of the point spread function (PSF), and adopts dynamic estimation of the PSF support region to improve convergence speed. The unsymmetrical factor is computed automatically to improve adaptivity. Image deconvolution experiments comparing Richardson-Lucy IBD and MLIBD were performed, and the results indicate that the iteration number is reduced by 22.4% and the peak signal-to-noise ratio is improved by 10.18 dB with the MLIBD method. The performance of the MLIBD algorithm is outstanding in the restoration of the FK5-857 adaptive optics images and the double-star adaptive optics images.

  12. [The spread of the wild Poliovirus in the rural environment, the case of the Adzopé health district, Côte d'Ivoire].

    PubMed

    Akoua-Koffi, C G; Nekouressi, G; Tieoulou, L; Guillot, S; Faye-Kette, H; Ehouman, A

    2004-05-01

    Wild Poliovirus spreading in a rural environment in Adzopé, Côte d'Ivoire. In order to determine the level of wild Poliovirus spread among rural children in a poliomyelitis-endemic country, 469 stool samples from children aged between three weeks and twelve years were processed according to WHO procedures for the transportation, conservation, isolation, and identification of Poliovirus. Intratypic differentiation was performed by an antigenic method using monoclonal antibodies and by genomic RFLP (restriction fragment length polymorphism). Fifty Poliovirus strains (10.7%) were isolated and analyzed: 15 vaccine-like Poliovirus type 1 (30%), 30 vaccine-like Poliovirus type 2 (60%), 4 vaccine-like Poliovirus type 3 (8%), and one wild Poliovirus type 3 (2%). As expected, in the majority of cases the duration of post-vaccinal viral excretion did not exceed two months. However, in 14% of cases it lasted between 3 and 9 months after the third OPV dose. This prolonged excretion could be due to an inefficient local intestinal immunity, or to no local immunity at all, in spite of the three OPV doses. These results argue in favor of increasing the number of OPV doses in such endemic zones. Moreover, OPV strains are well known to revert to pathogenicity in vaccinees; therefore, the long-term excretion of pathogenic OPV-derived strains by a certain proportion of vaccinees needs to be taken quite seriously.

  13. Outcomes using exhaled nitric oxide measurements as an adjunct to primary care asthma management.

    PubMed

    Hewitt, Richard S; Modrich, Catherine M; Cowan, Jan O; Herbison, G Peter; Taylor, D Robin

    2009-12-01

    Exhaled nitric oxide (FENO) measurements may help to highlight when inhaled corticosteroid (ICS) therapy should or should not be adjusted in asthma; this is often difficult to judge. Our aim was to evaluate a decision-support algorithm incorporating FENO measurements in a nurse-led asthma clinic. Asthma management was guided by an algorithm based on high (>45 ppb), intermediate (30-45 ppb), or low (<30 ppb) FENO levels and asthma control status. This provided for one of eight possible treatment options, including diagnosis review and ICS dose adjustment. Well-controlled asthma increased from 41% at visit 1 to 68% at visit 5 (p = 0.001). The mean fluticasone dose decreased from 312 mcg/day at visit 2 to 211 mcg/day at visit 5 (p = 0.022). There was a high level of protocol deviations (25%), often related to concerns about reducing the ICS dose. The percentage fall in FENO associated with a change in asthma status from poor control to good control was 35%. An FENO-based algorithm provided for a reduction in ICS doses without compromising asthma control. However, the results may have been influenced by the education and support which patients received. Reluctance to reduce the ICS dose was an issue which may have influenced the overall results. Australian Clinical Trials Registry # 012605000354684.
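    Schematically, the core of such an algorithm is a two-way decision table over the FENO band and control status. The sketch below illustrates the structure only; the action strings are hypothetical placeholders, and the published algorithm's eight options and clinical caveats are not reproduced here.

```python
def feno_band(feno_ppb: float) -> str:
    """Bin FENO using the thresholds quoted in the abstract."""
    if feno_ppb > 45:
        return "high"
    if feno_ppb >= 30:
        return "intermediate"
    return "low"

def suggested_action(feno_ppb: float, well_controlled: bool) -> str:
    """Illustrative decision table; NOT the published treatment options."""
    band = feno_band(feno_ppb)
    if band == "high":
        return "maintain ICS dose" if well_controlled \
            else "increase ICS dose / check adherence"
    if band == "intermediate":
        return "consider reducing ICS dose" if well_controlled \
            else "maintain ICS dose"
    return "reduce or step off ICS" if well_controlled else "review diagnosis"

print(suggested_action(52.0, well_controlled=False))
```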

  14. Optimization of Treatment Geometry to Reduce Normal Brain Dose in Radiosurgery of Multiple Brain Metastases with Single-Isocenter Volumetric Modulated Arc Therapy.

    PubMed

    Wu, Qixue; Snyder, Karen Chin; Liu, Chang; Huang, Yimei; Zhao, Bo; Chetty, Indrin J; Wen, Ning

    2016-09-30

    Treatment of patients with multiple brain metastases using a single-isocenter volumetric modulated arc therapy (VMAT) has been shown to decrease treatment time with the tradeoff of larger low dose to the normal brain tissue. We have developed an efficient Projection Summing Optimization Algorithm to optimize the treatment geometry in order to reduce dose to normal brain tissue for radiosurgery of multiple metastases with single-isocenter VMAT. The algorithm: (a) measures coordinates of outer boundary points of each lesion to be treated using the Eclipse Scripting Application Programming Interface, (b) determines the rotations of couch, collimator, and gantry using three matrices about the cardinal axes, (c) projects the outer boundary points of the lesion on to Beam Eye View projection plane, (d) optimizes couch and collimator angles by selecting the least total unblocked area for each specific treatment arc, and (e) generates a treatment plan with the optimized angles. The results showed significant reduction in the mean dose and low dose volume to normal brain, while maintaining the similar treatment plan qualities on the thirteen patients treated previously. The algorithm has the flexibility with regard to the beam arrangements and can be integrated in the treatment planning system for clinical application directly.
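    Steps (b)-(c) of the algorithm amount to composing three rotation matrices about the cardinal axes and dropping the beam-axis coordinate. The sketch below assumes one possible IEC-style axis convention; the actual convention and the Eclipse Scripting API calls used by the authors are not reproduced.

```python
import numpy as np

def rot_x(a): c, s = np.cos(a), np.sin(a); return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
def rot_y(a): c, s = np.cos(a), np.sin(a); return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
def rot_z(a): c, s = np.cos(a), np.sin(a); return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def bev_projection(points, gantry_deg, collimator_deg, couch_deg):
    """Project Nx3 isocentric points (mm) onto the beam's-eye-view u-v plane.

    Assumed (not authoritative) convention: couch about the vertical axis,
    gantry about the longitudinal axis, collimator about the beam axis.
    """
    g, c, t = np.radians([gantry_deg, collimator_deg, couch_deg])
    R = rot_z(c) @ rot_y(g) @ rot_z(t)    # beam-frame rotation
    p_beam = points @ R.T                 # transform points into beam frame
    return p_beam[:, :2]                  # drop the beam-axis coordinate

pts = np.array([[10.0, 0.0, 5.0], [0.0, 10.0, -5.0]])
print(bev_projection(pts, gantry_deg=30, collimator_deg=15, couch_deg=0))
# The unblocked area for a candidate couch/collimator pair can then be
# scored from the projected lesion outlines.
```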

  15. Automated segmentation of cardiac visceral fat in low-dose non-contrast chest CT images

    NASA Astrophysics Data System (ADS)

    Xie, Yiting; Liang, Mingzhu; Yankelevitz, David F.; Henschke, Claudia I.; Reeves, Anthony P.

    2015-03-01

    Cardiac visceral fat was segmented from low-dose non-contrast chest CT images using a fully automated method. Cardiac visceral fat is defined as the fatty tissues surrounding the heart region, enclosed by the lungs and posterior to the sternum. It is measured by constraining the heart region with an Anatomy Label Map that contains robust segmentations of the lungs and other major organs and estimating the fatty tissue within this region. The algorithm was evaluated on 124 low-dose and 223 standard-dose non-contrast chest CT scans from two public datasets. Based on visual inspection, 343 cases had good cardiac visceral fat segmentation. For quantitative evaluation, manual markings of cardiac visceral fat regions were made in 3 image slices for 45 low-dose scans and the Dice similarity coefficient (DSC) was computed. The automated algorithm achieved an average DSC of 0.93. Cardiac visceral fat volume (CVFV), heart region volume (HRV) and their ratio were computed for each case. The correlation between cardiac visceral fat measurement and coronary artery and aortic calcification was also evaluated. Results indicated the automated algorithm for measuring cardiac visceral fat volume may be an alternative method to the traditional manual assessment of thoracic region fat content in the assessment of cardiovascular disease risk.
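    The Dice similarity coefficient used for the evaluation above has the standard definition DSC = 2|A∩B|/(|A|+|B|); for binary masks it is a one-liner:

```python
import numpy as np

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for two binary segmentation masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Demo: two overlapping 1-D masks -> DSC = 2*2 / (3+3) = 0.67.
print(round(dice(np.array([1, 1, 1, 0, 0]), np.array([0, 1, 1, 1, 0])), 2))
```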

  16. A Decision Processing Algorithm for CDC Location Under Minimum Cost SCM Network

    NASA Astrophysics Data System (ADS)

    Park, N. K.; Kim, J. Y.; Choi, W. Y.; Tian, Z. M.; Kim, D. J.

    The location of CDCs within supply chain networks has attracted growing attention in recent years. Existing approaches to CDC siting have mainly relied on manual spreadsheet calculations aimed at minimizing total logistics cost. This study focuses on the development of a new processing algorithm to overcome the limitations of present methods, and examines the suitability of this algorithm through a case study. The algorithm suggested by this study is based on the principle of optimization over the directed graph of the SCM model, utilizing traditional techniques such as minimum spanning trees (MST) and shortest-path methods. The results of this study help to assess the suitability of an existing SCM network and could serve as a criterion in the decision-making process for building an optimal SCM network for prospective future demand.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, J; Zhang, W; Lu, J

    Purpose: To investigate the accuracy and feasibility of dose calculations using kilovoltage cone beam computed tomography in cervical cancer radiotherapy using a correction algorithm. Methods: The Hounsfield unit (HU) versus electron density (HU-density) curve was obtained for both the planning CT (pCT) and kilovoltage cone beam CT (CBCT) using a CIRS-062 calibration phantom. The pCT and kV-CBCT images have different HU values, and using a CBCT HU-density curve directly for dose calculation on CBCT images may introduce deviations in the dose distribution; it is therefore necessary to normalize the HU values between pCT and CBCT. A HU correction algorithm was used to produce corrected CBCT images (cCBCT). Fifteen intensity-modulated radiation therapy (IMRT) plans for cervical cancer were chosen, and the plans were transferred to the pCT and cCBCT data sets without any changes for dose calculation. Phantom and patient studies were carried out. The dose differences and dose distributions were compared between the cCBCT and pCT plans. Results: The HU values of the CBCT were measured several times, and the maximum change was less than 2%. Compared with pCT, the dose differences for CBCT and cCBCT images were 2.48% ± 0.65% (range: 1.3%-3.8%) and 0.48% ± 0.21% (range: 0.1%-0.82%) in the phantom study, respectively. For dose calculation on patient images, the dose differences were 2.25% ± 0.43% (range: 1.4%-3.4%) and 0.63% ± 0.35% (range: 0.13%-0.97%), respectively. For the dose distributions, the gamma passing rate of cCBCT was higher than that of CBCT. Conclusion: CBCT-based dose calculation is feasible in cervical cancer radiotherapy, and the correction algorithm offers acceptable accuracy. It will become a useful tool for adaptive radiation therapy.
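    One simple way to realize the HU normalization described above is a piecewise-linear mapping between the mean HU values measured in matching calibration-phantom inserts on CBCT and pCT, after which the pCT HU-density curve can be reused. The insert values below are placeholders, not the paper's measured data.

```python
import numpy as np

# Mean HU measured in matching CIRS inserts (illustrative numbers only).
cbct_hu = np.array([-950.0, -480.0, -60.0, 0.0, 240.0, 880.0])
pct_hu = np.array([-980.0, -510.0, -80.0, 0.0, 220.0, 910.0])

def correct_cbct(cbct_image: np.ndarray) -> np.ndarray:
    """Map a CBCT image onto the pCT HU scale (cCBCT) by interpolation."""
    return np.interp(cbct_image, cbct_hu, pct_hu)

print(correct_cbct(np.array([-500.0, 100.0])))
```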

  18. TU-D-201-05: Validation of Treatment Planning Dose Calculations: Experience Working with MPPG 5.a

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xue, J; Park, J; Kim, L

    2016-06-15

    Purpose: The newly published medical physics practice guideline MPPG 5.a. has set the minimum requirements for commissioning and QA of treatment planning dose calculations. We present our experience in the validation of a commercial treatment planning system based on MPPG 5.a. Methods: In addition to tests traditionally performed to commission a model-based dose calculation algorithm, extensive tests were carried out at short and extended SSDs, various depths, oblique gantry angles, and off-axis conditions to verify the robustness and limitations of the dose calculation algorithm. A comparison between measured and calculated dose was performed based on the validation tests and evaluation criteria recommended by MPPG 5.a. An ion chamber was used for the measurement of dose at points of interest, and diodes were used for photon IMRT/VMAT validations. Dose profiles were measured with a three-dimensional scanning system and calculated in the TPS using a virtual water phantom. Results: Calculated and measured absolute dose profiles were compared at each specified SSD and depth for open fields. Disagreement is easily identifiable in the difference curve. Subtle discrepancies revealed the limitations of the measurements, e.g., a spike in the high-dose region and an asymmetrical penumbra observed in the tests with an oblique MLC beam. The excellent results (>98% pass rate at a 3%/3 mm gamma criterion) on the end-to-end tests for both IMRT and VMAT are attributed to the quality of the beam data and a good understanding of the modeling. The limitations of the model and the uncertainty of measurement were considered when comparing the results. Conclusion: The extensive tests recommended by the MPPG encourage an understanding of the accuracy and limitations of a dose algorithm as well as of the uncertainty of measurement. Our experience shows how the suggested tests can be performed effectively to validate dose calculation models.

  19. Examination of the suitability of an implementation of the Jette localized heterogeneities fluence term L(1)(x,y,z) in an electron beam treatment planning algorithm

    NASA Astrophysics Data System (ADS)

    Rodebaugh, Raymond Francis, Jr.

    2000-11-01

    In this project we applied modifications of the Fermi-Eyges multiple scattering theory in an attempt to achieve a fast, accurate electron dose calculation algorithm. The dose was first calculated for an "average configuration" based on the patient's anatomy using a modification of the Hogstrom algorithm. It was split into a measured central-axis depth dose component based on the material between the source and the dose calculation point, and an off-axis component based on the physics of multiple Coulomb scattering for the average configuration. The former provided the general depth dose characteristics along the beam fan lines, while the latter provided the effects of collimation. The Gaussian localized heterogeneities theory of Jette provided the lateral redistribution of the electron fluence by heterogeneities. Here we terminated Jette's infinite series of fluence redistribution terms after the second term. Experimental comparison data were collected for 1 cm thick x 1 cm diameter air and aluminum pillboxes using the Varian 2100C linear accelerator at Rush-Presbyterian-St. Luke's Medical Center. For the air pillbox, the algorithm results were in reasonable agreement with measured data at both 9 and 20 MeV. For the aluminum pillbox, there were significant discrepancies between the results of this algorithm and experiment, particularly for the 9 MeV beam. Of course, a 1 cm thick aluminum heterogeneity is unlikely to be encountered in a clinical situation; the thickness, linear stopping power, and linear scattering power of aluminum are all well above what would normally be encountered. We found that the algorithm is highly sensitive to the choice of the average configuration. This is an indication that the series of fluence redistribution terms does not converge fast enough to terminate after the second term. It also makes it difficult to apply the algorithm to cases where there are no a priori means of choosing the best average configuration or where there is a complex geometry containing both weakly and strongly scattering heterogeneities. There is some hope of decreasing the sensitivity to the average configuration by including portions of the next term of the localized heterogeneities series.

  20. SU-E-T-219: Comprehensive Validation of the Electron Monte Carlo Dose Calculation Algorithm in RayStation Treatment Planning System for An Elekta Linear Accelerator with AgilityTM Treatment Head

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yi; Park, Yang-Kyun; Doppke, Karen P.

    2015-06-15

    Purpose: This study evaluated the performance of the electron Monte Carlo dose calculation algorithm in RayStation v4.0 for an Elekta machine with an Agility™ treatment head. Methods: The machine has five electron energies (6–18 MeV) and five applicators (6×6 to 25×25 cm²). The dose (cGy/MU at dmax), depth dose and profiles were measured in water using an electron diode at 100 cm SSD for nine square fields ≥2×2 cm² and four complex fields at normal incidence, and a 14×14 cm² field at 15° and 30° incidence. The dose was also measured for three square fields ≥4×4 cm² at 98, 105 and 110 cm SSD. Using selected energies, EBT3 radiochromic film was used for dose measurements in slab-shaped inhomogeneous phantoms and a breast phantom with surface curvature. The measured and calculated doses were analyzed using a gamma criterion of 3%/3 mm. Results: The calculated and measured doses varied by <3% for 116 of the 120 points, and <5% for the 4×4 cm² field at 110 cm SSD at 9–18 MeV. Gamma analysis comparing the 105 pairs of in-water isodose distributions gave passing rates >98.1%. The planar doses measured from films placed at 0.5 cm below a lung/tissue layer (12 MeV) and 1.0 cm below a bone/air layer (15 MeV) showed excellent agreement with calculations, with gamma passing rates of 99.9% and 98.5%, respectively. At the breast-tissue interface, the gamma passing rate was >98.8% at 12–18 MeV. The film results directly validated the accuracy of MU calculation and spatial dose distribution in the presence of tissue inhomogeneity and surface curvature - situations challenging for simpler pencil-beam algorithms. Conclusion: The electron Monte Carlo algorithm in RayStation v4.0 is fully validated for clinical use for the Elekta Agility™ machine. The comprehensive validation included small fields, complex fields, oblique beams, extended distance, tissue inhomogeneity and surface curvature.

  1. Low dose reconstruction algorithm for differential phase contrast imaging.

    PubMed

    Wang, Zhentian; Huang, Zhifeng; Zhang, Li; Chen, Zhiqiang; Kang, Kejun; Yin, Hongxia; Wang, Zhenchang; Stampanoni, Marco

    2011-01-01

    Differential phase contrast imaging computed tomography (DPCI-CT) is a novel x-ray inspection method to reconstruct the distribution of the refraction index, rather than the attenuation coefficient, in weakly absorbing samples. In this paper, we propose an iterative reconstruction algorithm for DPCI-CT that draws on compressed sensing theory. We first realize a differential algebraic reconstruction technique (DART) by discretizing the projection process of differential phase contrast imaging into a linear partial-derivative matrix. In this way, the compressed sensing reconstruction problem for DPCI can be transformed into an already-solved problem in transmission CT. Our algorithm has the potential to reconstruct the refraction index distribution of the sample from highly undersampled projection data. Thus it can significantly reduce the dose and inspection time. The proposed algorithm has been validated by numerical simulations and actual experiments.
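
    The core of such a scheme is that DPC measures the detector-direction derivative of the projection, so the effective system matrix is a difference operator composed with an ordinary projector. Below is a toy algebraic-reconstruction sketch under that assumption; the random stand-in projector and plain Kaczmarz sweeps replace the paper's actual DART/compressed-sensing machinery.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: A stands in for the CT projector (ray sums); D is a forward
# difference along the detector, so m = D @ A @ x mimics differential
# phase measurements (the derivative of the ordinary projection).
n_pix, n_rays = 64, 80
A = rng.random((n_rays, n_pix)) * (rng.random((n_rays, n_pix)) > 0.8)
D = np.eye(n_rays) - np.eye(n_rays, k=1)
M = D @ A                                   # differential system matrix
x_true = rng.random(n_pix)
m = M @ x_true

# Kaczmarz sweeps over the differential system: a simplified algebraic
# stand-in for the paper's iterative reconstruction.
x = np.zeros(n_pix)
for _ in range(100):
    for i in range(M.shape[0]):
        row = M[i]
        nrm = row @ row
        if nrm > 0.0:
            x += (m[i] - row @ x) / nrm * row

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```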

  2. Explicit Filtering Based Low-Dose Differential Phase Reconstruction Algorithm with the Grating Interferometry.

    PubMed

    Jiang, Xiaolei; Zhang, Li; Zhang, Ran; Yin, Hongxia; Wang, Zhenchang

    2015-01-01

    X-ray grating interferometry offers a novel framework for the study of weakly absorbing samples. Three kinds of information, that is, the attenuation, differential phase contrast (DPC), and dark-field images, can be obtained after a single scan, providing additional and complementary information to the conventional attenuation image. Phase shifts of X-rays are measured by the DPC method; hence, DPC-CT reconstructs refraction indexes rather than attenuation coefficients. In this work, we propose an explicit filtering based low-dose differential phase reconstruction algorithm, which enables artifact-free reconstruction from reduced projection data. The algorithm adopts a differential algebraic reconstruction technique (DART) with explicit filtering based sparse regularization rather than the commonly used total variation (TV) method. Both the numerical simulation and the biological sample experiment demonstrate the feasibility of the proposed algorithm.

  3. Explicit Filtering Based Low-Dose Differential Phase Reconstruction Algorithm with the Grating Interferometry

    PubMed Central

    Zhang, Li; Zhang, Ran; Yin, Hongxia; Wang, Zhenchang

    2015-01-01

    X-ray grating interferometry offers a novel framework for the study of weakly absorbing samples. Three kinds of information, that is, the attenuation, differential phase contrast (DPC), and dark-field images, can be obtained after a single scan, providing additional and complementary information to the conventional attenuation image. Phase shifts of X-rays are measured by the DPC method; hence, DPC-CT reconstructs refraction indexes rather than attenuation coefficients. In this work, we propose an explicit filtering based low-dose differential phase reconstruction algorithm, which enables artifact-free reconstruction from reduced projection data. The algorithm adopts a differential algebraic reconstruction technique (DART) with explicit filtering based sparse regularization rather than the commonly used total variation (TV) method. Both the numerical simulation and the biological sample experiment demonstrate the feasibility of the proposed algorithm. PMID:26089971

  4. Algorithm Engineering: Concepts and Practice

    NASA Astrophysics Data System (ADS)

    Chimani, Markus; Klein, Karsten

    Over the last years, the term algorithm engineering has become a widespread synonym for experimental evaluation in the context of algorithm development. Yet it implies even more. We discuss the major weaknesses of traditional "pen and paper" algorithmics and the ever-growing gap between theory and practice in the context of modern computer hardware and real-world problem instances. We present the key ideas and concepts of the central algorithm engineering cycle that is based on a full feedback loop: It starts with the design of the algorithm, followed by the analysis, implementation, and experimental evaluation. The results of the latter can then be reused for modifications to the algorithmic design, stronger or input-specific theoretic performance guarantees, etc. We describe the individual steps of the cycle, explaining the rationale behind them and giving examples of how to conduct these steps thoughtfully. Thereby we give an introduction to current algorithmic key issues like I/O-efficient or parallel algorithms, succinct data structures, hardware-aware implementations, and others. We conclude with two especially insightful success stories—shortest path problems and text search—where the application of algorithm engineering techniques led to tremendous performance improvements compared with previous state-of-the-art approaches.

  5. SU-E-T-616: Plan Quality Assessment of Both Treatment Planning System Dose and Measurement-Based 3D Reconstructed Dose in the Patient

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olch, A

    2015-06-15

    Purpose: Systematic radiotherapy plan quality assessment promotes quality improvement. Software tools can perform this analysis by applying site-specific structure dose metrics. The next step is to similarly evaluate the quality of the dose delivery. This study defines metrics for acceptable doses to targets and normal organs for a particular treatment site and scores each plan accordingly. The input can be the TPS or the measurement-based 3D patient dose. From this analysis, one can determine whether the delivered dose distribution to the patient receives a score comparable to the TPS plan score; otherwise replanning may be indicated. Methods: Eleven neuroblastoma patient plans were exported from Eclipse to the Quality Reports program. A scoring algorithm defined a score for each normal and target structure based on dose-volume parameters. Each plan was scored by this algorithm and the percentage of total possible points was obtained. Each plan also underwent IMRT QA measurements with a MapCHECK 2 or ArcCHECK. These measurements were input into the 3DVH program to compute the patient 3D dose distribution, which was analyzed using the same scoring algorithm as the TPS plan. Results: The mean quality score for the TPS plans was 75.37% (std dev = 14.15%) compared to 71.95% (std dev = 13.45%) for the 3DVH dose distribution. For 3 of 11 plans, the 3DVH-based quality score was higher than the TPS score, by 0.5 to 8.4 percentage points. For 8 of 11 plans, scores decreased based on the IMRT QA measurements, by 1.2 to 18.6 points. Conclusion: Software was used to determine the degree to which the plan quality score differed between the TPS and measurement-based dose. Although the delivery score was generally in good agreement with the planned dose score, some plans improved while one plan's delivered dose quality was significantly less than planned. This methodology helps evaluate both planned and delivered dose quality. Sun Nuclear Corporation has provided a license for the software described.

  6. Locating the source of spreading in temporal networks

    NASA Astrophysics Data System (ADS)

    Huang, Qiangjuan; Zhao, Chengli; Zhang, Xue; Yi, Dongyun

    2017-02-01

    The topological structure of many real networks changes with time. Thus, locating the source of spreading in a temporal network is a challenging problem, as the enormous size of many real networks makes it infeasible to observe the state of all nodes. In this paper, we propose an algorithm to solve this problem, named the backward temporal diffusion process. The proposed algorithm calculates the shortest temporal distance to locate the transmission source. We assume that the spreading process can be modeled as a simple diffusion process or by consensus dynamics. To improve the location accuracy, we also adopt four strategies to select which nodes should be observed, by ranking their importance in the temporal network. Our paper proposes a highly accurate method for locating the source in temporal networks and is, to the best of our knowledge, a frontier work in this field. Moreover, our framework has important significance for controlling the transmission of diseases or rumors and formulating immediate immunization strategies.
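
    The "shortest temporal distance" ingredient is an earliest-arrival computation over time-respecting paths. A minimal sketch, assuming instantaneous undirected contacts (the paper's backward diffusion and observer-ranking strategies are not reproduced here):

```python
from collections import defaultdict

def earliest_arrival(contacts, source, t0=0):
    """Earliest-arrival times ('shortest temporal distances') from `source`
    over a list of contacts (u, v, t), assuming instantaneous undirected
    contacts. One pass over time-sorted contacts suffices."""
    arrival = defaultdict(lambda: float("inf"))
    arrival[source] = t0
    for u, v, t in sorted(contacts, key=lambda e: e[2]):
        if arrival[u] <= t < arrival[v]:
            arrival[v] = t
        elif arrival[v] <= t < arrival[u]:
            arrival[u] = t
    return dict(arrival)

contacts = [("a", "b", 1), ("b", "c", 2), ("a", "c", 5), ("c", "d", 6)]
print(earliest_arrival(contacts, "a"))  # {'a': 0, 'b': 1, 'c': 2, 'd': 6}
```

    A candidate source can then be ranked by how well its computed temporal distances to a set of observed nodes match the observed infection times.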

  7. Dynamic node immunization for restraint of harmful information diffusion in social networks

    NASA Astrophysics Data System (ADS)

    Yang, Dingda; Liao, Xiangwen; Shen, Huawei; Cheng, Xueqi; Chen, Guolong

    2018-08-01

    Restraining the spread of harmful information is crucial for the healthy and sustainable development of social networks. We address the problem of restraining the spread of harmful information by immunizing nodes in the networks. Previous works have developed methods based on the network topology or studied how to immunize nodes in the presence of initially infected nodes. These static methods, in which nodes are immunized all at once, may perform poorly in certain situations due to the dynamics of diffusion. To tackle this problem, in this paper we introduce a new dynamic immunization problem in which nodes are immunized during the diffusion process. We formulate the problem and propose a novel heuristic algorithm that deals with two sub-problems: (1) how to select the node that achieves the best immunization effect at the present time, and (2) whether the selected node should be immunized immediately. Finally, we demonstrate the effectiveness of our algorithm through extensive experiments on various real datasets.

  8. Spread of the Tiger: Global Risk of Invasion by the Mosquito Aedes albopictus

    PubMed Central

    BENEDICT, MARK Q.; LEVINE, REBECCA S.; HAWLEY, WILLIAM A.; LOUNIBOS, L. PHILIP

    2008-01-01

    Aedes albopictus, commonly known as the Asian tiger mosquito, is currently the most invasive mosquito in the world. It is of medical importance due to its aggressive daytime human-biting behavior and ability to vector many viruses, including dengue, LaCrosse, and West Nile. Invasions into new areas of its potential range are often initiated through the transportation of eggs via the international trade in used tires. We use a genetic algorithm, Genetic Algorithm for Rule Set Production (GARP), to determine the ecological niche of Ae. albopictus and predict a global ecological risk map for the continued spread of the species. We combine this analysis with risk due to importation of tires from infested countries and their proximity to countries that have already been invaded to develop a list of countries most at risk for future introductions and establishments. Methods used here have potential for predicting risks of future invasions of vectors or pathogens. PMID:17417960

  9. Synthesis of atmospheric turbulence point spread functions by sparse and redundant representations

    NASA Astrophysics Data System (ADS)

    Hunt, Bobby R.; Iler, Amber L.; Bailey, Christopher A.; Rucci, Michael A.

    2018-02-01

    Atmospheric turbulence is a fundamental problem in imaging through long slant ranges, horizontal-range paths, or uplooking astronomical cases through the atmosphere. An essential characterization of atmospheric turbulence is the point spread function (PSF). Turbulence images can be simulated to study basic questions, such as image quality and image restoration, by synthesizing PSFs of desired properties. In this paper, we report on a method to synthesize PSFs of atmospheric turbulence. The method uses recent developments in sparse and redundant representations. From a training set of measured atmospheric PSFs, we construct a dictionary of "basis functions" that characterize the atmospheric turbulence PSFs. A PSF can be synthesized from this dictionary by a properly weighted combination of dictionary elements. We disclose an algorithm to synthesize PSFs from the dictionary. The algorithm can synthesize PSFs in three orders of magnitude less computing time than conventional wave optics propagation methods. The resulting PSFs are also shown to be statistically representative of the turbulence conditions that were used to construct the dictionary.
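
    The synthesis step itself is linear algebra: a target PSF is approximated as a non-negative weighted combination of dictionary atoms. A minimal sketch with random placeholder atoms (in the paper the dictionary is learned from measured turbulence PSFs):

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)

# Hypothetical dictionary: columns are vectorized PSF "basis functions";
# here they are random non-negative placeholders, not learned atoms.
n_pix, n_atoms = 32 * 32, 40
dictionary = np.abs(rng.standard_normal((n_pix, n_atoms)))

def synthesize_psf(target_psf_vec):
    """Approximate a target PSF by a non-negative weighted combination of
    dictionary atoms (a PSF is an intensity), then renormalize to unit sum."""
    weights, _ = nnls(dictionary, target_psf_vec)
    psf = dictionary @ weights
    return (psf / psf.sum()).reshape(32, 32)

target = np.abs(rng.standard_normal(n_pix))
psf = synthesize_psf(target)
print(psf.shape, round(psf.sum(), 6))  # (32, 32) ~1.0
```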

  10. SU-E-CAMPUS-T-02: Exploring Radiation Acoustics CT Dosimeter Design Aspects for Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alsanea, F; Moskvin, V; Stantz, K

    2014-06-15

    Purpose: To investigate the design aspects and dose imaging capabilities of a radiation acoustics computed tomography (RACT) dosimeter for proton-induced acoustics, with the objective of characterizing a pulsed pencil proton beam. The focus includes the effects of scanner geometry, transducer array, and transducer bandwidth on image quality. Methods: The geometry of the dosimeter is a cylindrical water phantom (length 40 cm, radius 15 cm) with 71 ultrasound transducers placed along the length and end of the cylinder to achieve a weighted set of projections with spherical sampling. A 3D filtered backprojection algorithm was used to reconstruct the dosimetric images, and the results were compared to the MC dose distribution. First, 3D Monte Carlo (MC) dose distributions for proton beams with ranges of 12 cm, 16 cm, 20 cm, and 27 cm were used to simulate the acoustic pressure signal within this scanner for a pulsed proton beam of 1.8×10⁷ protons, with a pulse width of 1 microsecond and a rise time of 0.1 microseconds. Doses within the Bragg peak and at the distal edge were compared to the MC analysis, where an integrated Gaussian was used to locate the 50% dose level of the distal edge. To evaluate spatial fidelity, a set of point sources within the scanner field of view (15×15×15 cm³) was simulated implementing a low-pass bandwidth response function (0 to 1 MHz) equivalent to a multiple-frequency transducer array, and the FWHM of the point spread function was determined. Results: From the reconstructed images, RACT and MC range values agree within 0.5 mm, and the average variation of the dose within the Bragg peak is within 2%. The spatial resolution tracked with transducer bandwidth and projection angle sampling, and could be kept at 1.5 mm. Conclusion: This design is ready for fabrication to start acquiring measurements. The 15 cm FOV is an optimum size for imaging dosimetry. Currently, simulations comparing transducer sensitivity, bandwidth, and proton beam parameters are being evaluated to assess signal-to-noise.

  11. A comparison between simplified and intensive dose-titration algorithms using AIR inhaled insulin for insulin-naive patients with type 2 diabetes in a randomized noninferiority trial.

    PubMed

    Mathieu, C; Cuddihy, R; Arakaki, R F; Belin, R M; Planquois, J-M; Lyons, J N; Heilmann, C R

    2009-09-01

    Insulin initiation and optimization is a challenge for patients with type 2 diabetes. Our objective was to determine whether the safety and efficacy of AIR inhaled insulin (Eli Lilly and Co., Indianapolis, IN) (AIR is a registered trademark of Alkermes, Inc., Cambridge, MA) using a simplified regimen was noninferior to an intensive regimen. This was an open-label, randomized study in insulin-naive adults not optimally controlled by oral antihyperglycemic medications. Simplified titration included a 6 U per meal AIR insulin starting dose. Individual doses were adjusted at mealtime in 2-U increments from the previous day's four-point self-monitored blood glucose (SMBG) (total ≤6 U). Starting AIR insulin doses for intensive titration were based on fasting blood glucose, gender, height, and weight. Patients conducted four-point SMBG daily for the study duration. Insulin doses were titrated based on the previous 3 days' mean SMBG (total ≤8 U). End point hemoglobin A1C (A1C) was 7.07 ± 0.09% and 6.87 ± 0.09% for the simplified (n = 178) and intensive (n = 180) algorithms, respectively. Noninferiority between algorithms was not established. The fasting blood glucose (least squares mean ± standard error) values for the simplified (137.27 ± 3.42 mg/dL) and intensive (133.13 ± 3.42 mg/dL) algorithms were comparable. Safety profiles were comparable. The hypoglycemic rate at 4, 8, 12, and 24 weeks was higher in patients receiving intensive titration (all P < 0.0001). The nocturnal hypoglycemic rate for patients receiving intensive titration was higher than for those receiving simplified titration at 8 (P < 0.015) and 12 weeks (P < 0.001). Noninferiority between the algorithms, as measured by A1C, was not demonstrated. This finding re-emphasizes the difficulty of identifying optimal, simplified insulin regimens for patients.
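
    The simplified arm's mealtime rule can be sketched as below. The 2-U step and the ≤6 U daily cap come from the abstract; the glucose thresholds are hypothetical illustrations, not the trial's protocol values.

```python
def adjust_doses(doses, smbg, low=70, high=130, step=2, max_total=6):
    """Adjust per-meal insulin doses in 2-U steps from the previous day's
    SMBG readings, capping the total daily change (<=6 U) as in the
    simplified arm. The glucose thresholds here are hypothetical."""
    total = 0
    new_doses = []
    for dose, bg in zip(doses, smbg):
        delta = step if bg > high else (-step if bg < low else 0)
        if abs(total + delta) > max_total:
            delta = 0                      # respect the daily cap
        total += delta
        new_doses.append(max(0, dose + delta))
    return new_doses

# The previous day's pre-meal SMBG drives today's mealtime doses.
print(adjust_doses([6, 6, 6], smbg=[180, 95, 60]))  # [8, 6, 4]
```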

  12. SU-E-T-295: Simultaneous Beam Sampling and Aperture Shape Optimization for Station Parameter Optimized Radiation Therapy (SPORT)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zarepisheh, M; Li, R; Xing, L

    Purpose: Station Parameter Optimized Radiation Therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital LINACs, in which the station parameters of a delivery system (such as aperture shape and weight, couch position/angle, and gantry/collimator angle) are optimized altogether. SPORT promises to deliver unprecedented radiation dose distributions efficiently, yet no optimization algorithm exists to implement it. The purpose of this work is to propose an optimization algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: We build a mathematical model whose variables are beam angles (including non-coplanar and/or even nonisocentric beams) and aperture shapes. To solve the resulting large-scale optimization problem, we devise an exact, convergent and fast optimization algorithm by integrating three advanced optimization techniques: column generation, the gradient method, and pattern search. Column generation is used to find a good set of aperture shapes as an initial solution by adding apertures sequentially. We then apply the gradient method to iteratively improve the current solution by reshaping the apertures and updating the beam angles along the gradient. The algorithm continues with the pattern search method to explore the part of the search space that cannot be reached by the gradient method. Results: The proposed technique was applied to a series of patient cases and significantly improved the plan quality. In a head-and-neck case, for example, the left parotid gland mean dose, brainstem max dose, spinal cord max dose, and mandible mean dose were reduced by 10%, 7%, 24% and 12%, respectively, compared to the conventional VMAT plan while maintaining the same PTV coverage. Conclusion: The combined use of column generation, gradient search and pattern search algorithms provides an effective way to simultaneously optimize the large collection of station parameters and significantly improves the quality of the resultant treatment plans compared with conventional VMAT or IMRT treatments.

  13. SU-E-T-764: Track Repeating Algorithm for Proton Therapy Applied to Intensity Modulated Proton Therapy for Head-And-Neck Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yepes, P; Mirkovic, D; Mohan, R

    Purpose: To determine the suitability of fast Monte Carlo techniques for dose calculation in particle therapy based on a track-repeating algorithm for intensity modulated proton therapy (IMPT). The application of this technique will make possible detailed retrospective studies of large cohorts of patients, which may lead to a better determination of relative biological effects from the analysis of patient data. Methods: A cohort of six head-and-neck patients treated at the University of Texas MD Anderson Cancer Center with IMPT was utilized. The dose distributions were calculated with the standard treatment planning system (TPS), MCNPX, GEANT4 and FDC, a fast track-repeating algorithm for proton therapy, for both the verification and the patient plans. FDC is based on a GEANT4 database of trajectories of protons in water. The obtained dose distributions were compared to each other using γ-index criteria of 3 mm/3% and 2 mm/2% for the maximum spatial and dose differences. The γ-index was calculated for voxels with a dose at least 10% of the maximum delivered dose. Dose-volume histograms were also calculated for the various dose distributions. Results: Good agreement between GEANT4 and FDC was found, with less than 1% of the voxels having a γ-index larger than 1 for the 2 mm/2% criterion. The agreement between MCNPX and FDC is within the requirements of clinical standards, even though it is slightly worse than the comparison with GEANT4. The comparison with the TPS yielded larger differences, which is to be expected because pencil beam algorithms do not always perform well in highly inhomogeneous areas like head-and-neck. Conclusion: The good agreement between a track-repeating algorithm and a full Monte Carlo code for a large cohort of patients and a challenging site like head-and-neck opens the path to systematic and detailed studies of large cohorts, which may yield a better understanding of biological effects.
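
    For readers unfamiliar with the γ-index, a minimal 1D global implementation of the combined distance/dose-difference criterion with the 10% low-dose cutoff looks like this; it is a didactic sketch, not the evaluation code used in the study:

```python
import numpy as np

def gamma_1d(dose_ref, dose_eval, x, dd=0.02, dta=2.0, cutoff=0.10):
    """Global 1D gamma index (here 2%/2 mm), evaluated only where the
    reference dose exceeds `cutoff` of its maximum."""
    d_max = dose_ref.max()
    gamma = np.full(dose_ref.shape, np.nan)
    for i, (xr, dr) in enumerate(zip(x, dose_ref)):
        if dr < cutoff * d_max:
            continue
        dist2 = ((x - xr) / dta) ** 2
        diff2 = ((dose_eval - dr) / (dd * d_max)) ** 2
        gamma[i] = np.sqrt(np.min(dist2 + diff2))
    return gamma

x = np.linspace(0.0, 100.0, 201)                 # depth in mm
ref = np.exp(-((x - 50.0) / 15.0) ** 2)          # toy depth-dose curves
ev = 1.01 * np.exp(-((x - 50.5) / 15.0) ** 2)
g = gamma_1d(ref, ev, x)
valid = ~np.isnan(g)
print("pass rate:", (g[valid] <= 1.0).mean())
```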

  14. SU-C-BRC-04: Efficient Dose Calculation Algorithm for FFF IMRT with a Simplified Bivariate Gaussian Source Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, F; Park, J; Barraclough, B

    2016-06-15

    Purpose: To develop an efficient and accurate independent dose calculation algorithm with a simplified analytical source model for the quality assurance and safe delivery of flattening filter free (FFF) IMRT on an Elekta Versa HD. Methods: The source model consisted of a point source and a 2D bivariate Gaussian source, modeling the primary photons and the combined effect of head scatter, monitor chamber backscatter and the collimator exchange effect, respectively. The in-air fluence was first calculated by back-projecting the edges of the beam defining devices onto the source plane and integrating the visible source distribution. The effect of the rounded MLC leaf end, tongue-and-groove and interleaf transmission was taken into account in the back-projection. The in-air fluence was then modified with a fourth-degree polynomial modeling the cone-shaped dose distribution of FFF beams. Planar dose distribution was obtained by convolving the in-air fluence with a dose deposition kernel (DDK) consisting of the sum of three 2D Gaussian functions. The parameters of the source model and the DDK were commissioned using measured in-air output factors (Sc) and cross-beam profiles, respectively. A novel method was used to eliminate the volume averaging effect of ion chambers in determining the DDK. Planar dose distributions of five head-and-neck FFF-IMRT plans were calculated and compared against measurements performed with a 2D diode array (MapCHECK™) to validate the accuracy of the algorithm. Results: The proposed source model predicted Sc for both 6MV and 10MV with an accuracy better than 0.1%. With a stringent gamma criterion (2%/2 mm/local difference), the passing rate of the FFF-IMRT dose calculation was 97.2±2.6%. Conclusion: The removal of the flattening filter represents a simplification of the head structure which allows the use of a simpler source model for very accurate dose calculation. The proposed algorithm offers an effective way to ensure the safe delivery of FFF-IMRT.
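
    The final planar-dose step is a 2D convolution of the in-air fluence with the triple-Gaussian DDK. A minimal sketch with hypothetical weights, widths, and cone-shape coefficients (the commissioned values from the study are not reproduced here):

```python
import numpy as np
from scipy.signal import fftconvolve

# 0.5 mm grid. The DDK is modeled, as in the abstract, as a sum of three
# 2D Gaussians; the weights and widths below are hypothetical placeholders.
xx, yy = np.meshgrid(np.arange(-50.0, 50.5, 0.5), np.arange(-50.0, 50.5, 0.5))

def gauss2d(sigma):
    g = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return g / g.sum()

ddk = 0.80 * gauss2d(1.0) + 0.15 * gauss2d(4.0) + 0.05 * gauss2d(12.0)

# Toy in-air fluence: a 20x20 mm open field modulated by a fourth-degree
# radial polynomial standing in for the FFF cone shape (made-up coefficients).
r2 = xx**2 + yy**2
cone = 1.0 - 2.0e-4 * r2 + 1.0e-8 * r2**2
fluence = ((np.abs(xx) < 10.0) & (np.abs(yy) < 10.0)) * cone

planar_dose = fftconvolve(fluence, ddk, mode="same")
print(planar_dose.shape, planar_dose.max())
```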

  15. Inherent smoothness of intensity patterns for intensity modulated radiation therapy generated by simultaneous projection algorithms

    NASA Astrophysics Data System (ADS)

    Xiao, Ying; Michalski, Darek; Censor, Yair; Galvin, James M.

    2004-07-01

    The efficient delivery of intensity modulated radiation therapy (IMRT) depends on finding optimized beam intensity patterns that produce dose distributions that meet given constraints for the tumour as well as any critical organs to be spared. Many optimization algorithms that are used for beamlet-based inverse planning are susceptible to large variations of neighbouring intensities. Accurately delivering an intensity pattern with a large number of extrema can prove impossible given the mechanical limitations of standard multileaf collimator (MLC) delivery systems. In this study, we apply Cimmino's simultaneous projection algorithm to the beamlet-based inverse planning problem, modelled mathematically as a system of linear inequalities. We show that using this method allows us to arrive at a smoother intensity pattern. From our experimental observations, including nonlinear terms in the simultaneous projection algorithm to deal with dose-volume histogram (DVH) constraints does not compromise this property. The smoothness properties are compared with those from other optimization algorithms, including simulated annealing and the gradient descent method. The simultaneous property of these algorithms is ideally suited to parallel computing technologies.
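
    Cimmino's method averages the projections of the current iterate onto all half-spaces simultaneously, which is what makes it parallelizable. A minimal sketch for a generic feasibility problem A x ≤ b (a toy system, not the clinical inverse-planning model used in the study):

```python
import numpy as np

def cimmino(A, b, iters=500, lam=1.0):
    """Cimmino's simultaneous projection method for the linear feasibility
    problem A @ x <= b, with equal weights on all half-spaces. Each step
    averages the projections of x onto every violated half-space."""
    m, n = A.shape
    x = np.zeros(n)
    row_norm2 = (A * A).sum(axis=1)
    w = np.full(m, 1.0 / m)
    for _ in range(iters):
        violation = np.maximum(A @ x - b, 0.0)    # per-inequality excess
        if violation.max() < 1e-9:
            break
        x -= lam * (A.T @ (w * violation / row_norm2))
    return x

# Toy feasibility problem, feasible by construction.
rng = np.random.default_rng(2)
A = rng.standard_normal((30, 10))
b = A @ rng.random(10) + 0.1
x = cimmino(A, b)
print("max violation:", np.maximum(A @ x - b, 0.0).max())
```

    Because every violated constraint pulls on the iterate at once with a small weight, the update tends to change neighbouring variables gradually, which is consistent with the smoother intensity patterns reported above.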

  16. Stereotactic radiotherapy of intrapulmonary lesions: comparison of different dose calculation algorithms for Oncentra MasterPlan®.

    PubMed

    Troeller, Almut; Garny, Sylvia; Pachmann, Sophia; Kantz, Steffi; Gerum, Sabine; Manapov, Farkhad; Ganswindt, Ute; Belka, Claus; Söhn, Matthias

    2015-02-22

    High-accuracy dose calculation algorithms, such as Monte Carlo (MC) and collapsed cone (CC), determine dose in inhomogeneous tissue more accurately than pencil beam (PB) algorithms. However, prescription protocols based on clinical experience with PB are often used for treatment plans calculated with CC. This may lead to treatment plans with changes in field size (FS) and changes in dose to organs at risk (OAR), especially for small tumor volumes in lung tissue treated with SABR. We re-evaluated 17 3D-conformal treatment plans for small intrapulmonary lesions with a prescription of 60 Gy in fractions of 7.5 Gy to the 80% isodose. All treatment plans were initially calculated in Oncentra MasterPlan® using a PB algorithm and recalculated with CC (CCre-calc). Furthermore, a CC-based plan with coverage similar to the PB plan (CCcov) and a CC plan with relaxed coverage criteria (CCclin) were created. The plans were analyzed in terms of Dmean, Dmin, Dmax and coverage for GTV, PTV and ITV. Changes in mean lung dose (MLD), V10Gy and V20Gy were evaluated for the lungs. The re-planned CC plans were compared to the original PB plans regarding changes in total monitor units (MU) and average FS. When PB plans were recalculated with CC, the average V60Gy of GTV, ITV and PTV decreased by 13.2%, 19.9% and 41.4%, respectively. Average Dmean decreased by 9% (GTV), 11.6% (ITV) and 14.2% (PTV). Dmin decreased by 18.5% (GTV), 21.3% (ITV) and 17.5% (PTV). Dmax declined by 7.5%. PTV coverage correlated with PTV volume (p < 0.001). MLD, V10Gy, and V20Gy were significantly reduced in the CC plans. Both CCcov and CCclin had significantly increased MU and FS compared to PB. Recalculation of PB plans for small lung lesions with CC showed a strong decline in dose and coverage for GTV, ITV and PTV, and a decline in lung dose. Thus, switching from a PB algorithm to CC while aiming to obtain similar target coverage can be associated with the application of more MU and extension of the radiotherapy fields, causing greater OAR exposure.

  17. Compressed sensing with gradient total variation for low-dose CBCT reconstruction

    NASA Astrophysics Data System (ADS)

    Seo, Chang-Woo; Cha, Bo Kyung; Jeon, Seongchae; Huh, Young; Park, Justin C.; Lee, Byeonghun; Baek, Junghee; Kim, Eunyoung

    2015-06-01

    This paper describes the improvement of convergence speed achieved with a gradient total variation (GTV) term in compressed sensing (CS) for low-dose cone-beam computed tomography (CBCT) reconstruction. We derive a fast algorithm for constrained total variation (TV) based reconstruction from a minimum number of noisy projections. To achieve this we combine the GTV with a TV-norm regularization term to promote sparsity in the X-ray attenuation characteristics of the human body and accelerate convergence. The GTV is derived from the TV and is computationally more efficient, converging faster to the desired solution. The numerical algorithm is simple and achieves relatively fast convergence. We apply a gradient projection algorithm that seeks a solution iteratively in the direction of the projected gradient while enforcing non-negativity of the solution. In comparison with the Feldkamp, Davis, and Kress (FDK) and conventional TV algorithms, the proposed GTV algorithm converged in ≤18 iterations, whereas the original TV algorithm needed at least 34 iterations, when reconstructing the chest phantom images from 50% fewer projections than the FDK algorithm. Future investigation includes improving imaging quality, particularly regarding X-ray cone-beam scatter, and motion artifacts of CBCT reconstruction.
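
    A gradient-projection iteration of this general type, with a smoothed TV penalty and a non-negativity projection, can be sketched as below; the projector, weights, and step size are toy placeholders rather than the authors' GTV formulation:

```python
import numpy as np

def tv_gradient(u, eps=1e-8):
    """Gradient of a smoothed isotropic TV term (forward differences)."""
    gx = np.zeros_like(u); gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy = np.zeros_like(u); gy[:, :-1] = u[:, 1:] - u[:, :-1]
    mag = np.sqrt(gx**2 + gy**2 + eps)
    px, py = gx / mag, gy / mag
    g = np.zeros_like(u)                  # adjoint of the forward difference
    g[:-1, :] -= px[:-1, :]; g[1:, :] += px[:-1, :]
    g[:, :-1] -= py[:, :-1]; g[:, 1:] += py[:, :-1]
    return g

def gradient_projection(A, m, shape, beta=0.05, step=1e-4, iters=500):
    """Minimize ||A u - m||^2 + beta*TV(u) by projected gradient descent,
    enforcing non-negativity of the image at every step."""
    u = np.zeros(shape)
    for _ in range(iters):
        grad = 2.0 * (A.T @ (A @ u.ravel() - m)).reshape(shape)
        grad += beta * tv_gradient(u)
        u = np.maximum(u - step * grad, 0.0)
    return u

# Toy undersampled problem: recover a blocky 8x8 image from 48 ray sums.
rng = np.random.default_rng(3)
A = rng.random((48, 64))
u_true = np.zeros((8, 8)); u_true[2:6, 2:6] = 1.0
u = gradient_projection(A, A @ u_true.ravel(), (8, 8))
print(np.abs(u - u_true).max())
```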

  18. Phasor based single-molecule localization microscopy in 3D (pSMLM-3D): An algorithm for MHz localization rates using standard CPUs

    NASA Astrophysics Data System (ADS)

    Martens, Koen J. A.; Bader, Arjen N.; Baas, Sander; Rieger, Bernd; Hohlbein, Johannes

    2018-03-01

    We present a fast and model-free 2D and 3D single-molecule localization algorithm that allows more than 3 × 10⁶ localizations per second to be calculated on a standard multi-core central processing unit with localization accuracies in line with the most accurate algorithms currently available. Our algorithm converts the region of interest around a point spread function to two phase vectors (phasors) by calculating the first Fourier coefficients in both the x- and y-direction. The angles of these phasors are used to localize the center of the single fluorescent emitter, and the ratio of the magnitudes of the two phasors is a measure for astigmatism, which can be used to obtain depth information (z-direction). Our approach can be used both as a stand-alone algorithm for maximizing localization speed and as a first estimator for more time consuming iterative algorithms.
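
    The phasor recipe is compact enough to sketch directly: take the first Fourier coefficients of the ROI, read the position off their phase angles, and use the magnitude ratio as the astigmatism measure. A minimal sketch (conventions such as the sign of the phase are assumptions):

```python
import numpy as np

def phasor_localize(roi):
    """pSMLM-style localization: the phases of the first Fourier
    coefficients along x and y encode the emitter position; the ratio of
    their magnitudes is a measure of astigmatism (for z lookup)."""
    n = roi.shape[0]                       # square ROI assumed
    f = np.fft.fft2(roi)
    fx, fy = f[0, 1], f[1, 0]              # first coefficients in x and y
    x0 = (-np.angle(fx)) % (2 * np.pi) * n / (2 * np.pi)
    y0 = (-np.angle(fy)) % (2 * np.pi) * n / (2 * np.pi)
    return x0, y0, np.abs(fx) / np.abs(fy)

# Toy Gaussian spot centred at (x, y) = (4.3, 6.1) in a 13x13 ROI.
yy, xx = np.mgrid[0:13, 0:13]
roi = np.exp(-((xx - 4.3) ** 2 + (yy - 6.1) ** 2) / (2 * 1.4 ** 2))
print(phasor_localize(roi))  # approximately (4.3, 6.1, ~1)
```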

  19. Improving Vector Evaluated Particle Swarm Optimisation by Incorporating Nondominated Solutions

    PubMed Central

    Lim, Kian Sheng; Ibrahim, Zuwairie; Buyamin, Salinda; Ahmad, Anita; Naim, Faradila; Ghazali, Kamarul Hawari; Mokhtar, Norrima

    2013-01-01

    The Vector Evaluated Particle Swarm Optimisation algorithm is widely used to solve multiobjective optimisation problems. This algorithm optimises one objective using a swarm of particles where their movements are guided by the best solution found by another swarm. However, the best solution of a swarm is only updated when a newly generated solution has better fitness than the best solution at the objective function optimised by that swarm, yielding poor solutions for the multiobjective optimisation problems. Thus, an improved Vector Evaluated Particle Swarm Optimisation algorithm is introduced by incorporating the nondominated solutions as the guidance for a swarm rather than using the best solution from another swarm. In this paper, the performance of improved Vector Evaluated Particle Swarm Optimisation algorithm is investigated using performance measures such as the number of nondominated solutions found, the generational distance, the spread, and the hypervolume. The results suggest that the improved Vector Evaluated Particle Swarm Optimisation algorithm has impressive performance compared with the conventional Vector Evaluated Particle Swarm Optimisation algorithm. PMID:23737718
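
    For reference, the conventional VEPSO update that this work improves upon can be sketched as below: each swarm optimizes one objective, but its particles are pulled toward the best solution found by the other swarm. The paper's improvement would replace that cross-swarm guide with a nondominated archive member (toy objectives and parameters assumed):

```python
import numpy as np

rng = np.random.default_rng(4)

def f1(x): return np.sum(x**2, axis=1)            # objective 1
def f2(x): return np.sum((x - 2.0)**2, axis=1)    # objective 2

n, dim, w, c1, c2 = 20, 3, 0.7, 1.5, 1.5
objs = [f1, f2]
pos = [rng.uniform(-5.0, 5.0, (n, dim)) for _ in range(2)]
vel = [np.zeros((n, dim)) for _ in range(2)]
pbest = [p.copy() for p in pos]
pbest_val = [objs[s](pbest[s]) for s in range(2)]
gbest = [pbest[s][np.argmin(pbest_val[s])].copy() for s in range(2)]

for _ in range(100):
    for s in range(2):
        guide = gbest[1 - s]               # guided by the *other* swarm
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        vel[s] = w * vel[s] + c1 * r1 * (pbest[s] - pos[s]) \
                            + c2 * r2 * (guide - pos[s])
        pos[s] = pos[s] + vel[s]
        val = objs[s](pos[s])
        improved = val < pbest_val[s]
        pbest[s][improved] = pos[s][improved]
        pbest_val[s][improved] = val[improved]
        gbest[s] = pbest[s][np.argmin(pbest_val[s])].copy()

print(gbest)  # two solutions trading off the objectives
```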

  20. Improving Vector Evaluated Particle Swarm Optimisation by incorporating nondominated solutions.

    PubMed

    Lim, Kian Sheng; Ibrahim, Zuwairie; Buyamin, Salinda; Ahmad, Anita; Naim, Faradila; Ghazali, Kamarul Hawari; Mokhtar, Norrima

    2013-01-01

    The Vector Evaluated Particle Swarm Optimisation algorithm is widely used to solve multiobjective optimisation problems. This algorithm optimises one objective using a swarm of particles where their movements are guided by the best solution found by another swarm. However, the best solution of a swarm is only updated when a newly generated solution has better fitness than the best solution at the objective function optimised by that swarm, yielding poor solutions for the multiobjective optimisation problems. Thus, an improved Vector Evaluated Particle Swarm Optimisation algorithm is introduced by incorporating the nondominated solutions as the guidance for a swarm rather than using the best solution from another swarm. In this paper, the performance of improved Vector Evaluated Particle Swarm Optimisation algorithm is investigated using performance measures such as the number of nondominated solutions found, the generational distance, the spread, and the hypervolume. The results suggest that the improved Vector Evaluated Particle Swarm Optimisation algorithm has impressive performance compared with the conventional Vector Evaluated Particle Swarm Optimisation algorithm.

  1. Improving Vector Evaluated Particle Swarm Optimisation Using Multiple Nondominated Leaders

    PubMed Central

    Lim, Kian Sheng; Buyamin, Salinda; Ahmad, Anita; Shapiai, Mohd Ibrahim; Naim, Faradila; Mubin, Marizan; Kim, Dong Hwa

    2014-01-01

    The vector evaluated particle swarm optimisation (VEPSO) algorithm was previously improved by incorporating nondominated solutions for solving multiobjective optimisation problems. However, the obtained solutions did not converge close to the Pareto front and also did not distribute evenly over the Pareto front. Therefore, in this study, the concept of multiple nondominated leaders is incorporated to further improve the VEPSO algorithm. Hence, multiple nondominated solutions that are best at a respective objective function are used to guide particles in finding optimal solutions. The improved VEPSO is measured by the number of nondominated solutions found, generational distance, spread, and hypervolume. The results from the conducted experiments show that the proposed VEPSO significantly improved the existing VEPSO algorithms. PMID:24883386

  2. SU-E-T-48: A Multi-Institutional Study of Independent Dose Verification for Conventional, SRS and SBRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takahashi, R; Kamima, T; Tachibana, H

    2015-06-15

    Purpose: To show the results of a multi-institutional study of independent dose verification for conventional, stereotactic radiosurgery (SRS) and stereotactic body radiotherapy (SBRT) plans based on the action level of AAPM TG-114. Methods: This study was performed at 12 institutions in Japan. To eliminate bias in the independent dose verification program (Indp), all of the institutions used the same CT-based independent dose verification software (Simple MU Analysis, Triangle Products, JP) with a Clarkson-based algorithm. Eclipse (AAA, PBC), Pinnacle³ (Adaptive Convolve) and Xio (Superposition) were used as treatment planning systems (TPS). The confidence limits (CL, mean ± 2SD) of the dose comparison between the TPS and the Indp were evaluated for 18 sites (head, breast, lung, pelvis, etc.). Results: A retrospective analysis of 6352 treatment fields was conducted. The CLs for conventional, SRS and SBRT plans were 1.0±3.7%, 2.0±2.5% and 6.2±4.4%, respectively. In conventional plans, most of the sites were within the 5% TG-114 action level; however, there were systematic differences (4.0±4.0% and 2.5±5.8% for breast and lung, respectively). In SRS plans, our results showed good agreement relative to the action level. In SBRT plans, the discrepancy from the Indp varied depending on the dose calculation algorithm of the TPS. Conclusion: The choice of dose calculation algorithm for the TPS and the Indp affects the action level. It is effective to set site-specific tolerances, especially for sites where inhomogeneity corrections strongly affect the dose distribution.
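
    The CL statistic itself is simple to compute from per-field dose differences; a minimal sketch, assuming differences are expressed as a percentage of the independent calculation (the exact sign and denominator conventions are assumptions):

```python
import numpy as np

def confidence_limit(tps_dose, indep_dose):
    """Per-field percent dose difference and the CL (mean ± 2SD) used for
    comparison against the TG-114 action levels."""
    diff = 100.0 * (tps_dose - indep_dose) / indep_dose
    return diff.mean(), 2.0 * diff.std(ddof=1)

# Toy data standing in for per-field TPS vs independent doses.
rng = np.random.default_rng(5)
tps = rng.normal(200.0, 5.0, 300)
indep = tps * rng.normal(1.01, 0.03, 300)
mean_diff, two_sd = confidence_limit(tps, indep)
print(f"CL = {mean_diff:.1f}% ± {two_sd:.1f}%")
```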

  3. Performance of two commercial electron beam algorithms over regions close to the lung-mediastinum interface, against Monte Carlo simulation and point dosimetry in virtual and anthropomorphic phantoms.

    PubMed

    Ojala, J; Hyödynmaa, S; Barańczyk, R; Góra, E; Waligórski, M P R

    2014-03-01

    Electron radiotherapy is applied to treat the chest wall close to the mediastinum. The performance of the GGPB and eMC algorithms implemented in the Varian Eclipse treatment planning system (TPS) was studied in this region for 9 and 16 MeV beams, against Monte Carlo (MC) simulations, point dosimetry in a water phantom and dose distributions calculated in virtual phantoms. For the 16 MeV beam, the accuracy of these algorithms was also compared over the lung-mediastinum interface region of an anthropomorphic phantom, against MC calculations and thermoluminescence dosimetry (TLD). In the phantom with a lung-equivalent slab the results were generally congruent, the eMC results for the 9 MeV beam slightly overestimating the lung dose, and the GGPB results for the 16 MeV beam underestimating the lung dose. Over the lung-mediastinum interface, for 9 and 16 MeV beams, the GGPB code underestimated the lung dose and overestimated the dose in water close to the lung, compared to the congruent eMC and MC results. In the anthropomorphic phantom, results of TLD measurements and MC and eMC calculations agreed, while the GGPB code underestimated the lung dose. Good agreement between TLD measurements and MC calculations attests to the accuracy of "full" MC simulations as a reference for benchmarking TPS codes. Application of the GGPB code in chest wall radiotherapy may result in significant underestimation of the lung dose and overestimation of dose to the mediastinum, affecting plan optimization over volumes close to the lung-mediastinum interface, such as the lung or heart. Copyright © 2013 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  4. Sci-Sat AM: Radiation Dosimetry and Practical Therapy Solutions - 04: On 3D Fabrication of Phantoms and Experimental Verification of Patient Dose Computation Algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khan, Rao; Zavan, Rodolfo; McGeachy, Philip

    2016-08-15

    Purpose: The transport-based dose calculation algorithm Acuros XB (AXB) has been shown, mostly through comparisons with Monte Carlo simulations, to accurately account for heterogeneities. This study aims to provide additional experimental verification of AXB for flattened and unflattened clinical energies in low-density phantoms of the same material. Materials and Methods: Polystyrene slabs were created using a bench-top 3D printer. Six slabs were printed at varying densities from 0.23 g/cm³ to 0.68 g/cm³, corresponding to the densities of different human tissues. The slabs were used to form different single- and multilayer geometries. Dose was calculated with AXB 11.0.31 for 6MV and 15MV flattened and 6FFF (flattening filter free) energies for field sizes of 2×2 cm² and 5×5 cm². The phantoms, containing radiochromic EBT3 films, were irradiated. Absolute dose profiles and 2D gamma analyses were performed for 96 dose planes. Results: For all single-slab and multislab configurations and all energies, absolute dose differences between the AXB calculations and film measurements remained <3% for both field sizes, with slightly poorer agreement in the penumbra. The gamma passing rate at 2%/2 mm averaged 98% over all combinations of fields, phantoms and photon energies. Conclusions: The transport-based dose algorithm AXB is in good agreement with experimental measurements for small field sizes using 6MV, 6FFF and 15MV beams adjacent to low-density heterogeneous media. This work provides sufficient experimental grounds to support the use of AXB for heterogeneous dose calculation purposes.

  5. Population pharmacokinetics of busulfan in pediatric and young adult patients undergoing hematopoietic cell transplant: a model-based dosing algorithm for personalized therapy and implementation into routine clinical use.

    PubMed

    Long-Boyle, Janel R; Savic, Rada; Yan, Shirley; Bartelink, Imke; Musick, Lisa; French, Deborah; Law, Jason; Horn, Biljana; Cowan, Morton J; Dvorak, Christopher C

    2015-04-01

    Population pharmacokinetic (PK) studies of busulfan in children have shown that individualized model-based algorithms provide improved targeted busulfan therapy when compared with conventional dose guidelines. The adoption of population PK models into routine clinical practice has been hampered by the tendency of pharmacologists to develop models too complex to be practical for clinicians to use. The authors aimed to develop a population PK model for busulfan in children that can reliably achieve therapeutic exposure (concentration at steady state) and to implement a simple model-based tool for the initial dosing of busulfan in children undergoing hematopoietic cell transplantation. Model development was conducted using retrospective data available for 90 pediatric and young adult patients who had undergone hematopoietic cell transplantation with busulfan conditioning. Busulfan drug levels and potential covariates influencing drug exposure were analyzed using the nonlinear mixed effects modeling software NONMEM. The final population PK model was implemented into a clinician-friendly Microsoft Excel-based tool and used to recommend initial doses of busulfan in a group of 21 pediatric patients prospectively dosed based on the population PK model. Modeling of busulfan time-concentration data indicates that busulfan clearance displays nonlinearity in children, decreasing by up to approximately 20% between concentrations of 250 and 2000 ng/mL. Important patient-specific covariates found to significantly impact busulfan clearance were actual body weight and age. The percentage of individuals achieving a therapeutic concentration at steady state was significantly higher in subjects receiving initial doses based on the population PK model (81%) than in historical controls dosed on conventional guidelines (52%) (P = 0.02). When compared with the conventional dosing guidelines, the model-based algorithm demonstrates significant improvement in providing targeted busulfan therapy in children and young adults.
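
    The steady-state dosing logic behind such model-based tools follows from rate in equals rate out: dose per interval = CL × Css,target × τ. A heavily simplified sketch with a placeholder clearance value (the published model makes CL a nonlinear function of weight, age and concentration):

```python
def busulfan_initial_dose(weight_kg, cl_ml_min_kg=3.0,
                          css_target_ng_ml=900.0, tau_h=6.0):
    """Model-based starting dose from dose/interval = CL * Css * tau.
    The clearance value is a hypothetical placeholder, not the study's
    covariate model."""
    cl_l_h = cl_ml_min_kg * weight_kg * 60.0 / 1000.0   # mL/min -> L/h
    css_mg_l = css_target_ng_ml / 1000.0                # ng/mL -> mg/L
    return cl_l_h * css_mg_l * tau_h                    # mg per dose

print(round(busulfan_initial_dose(20.0), 1))  # mg per q6h dose for 20 kg
```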

  6. Advanced Optimal Extraction for the Spitzer/IRS

    NASA Astrophysics Data System (ADS)

    Lebouteiller, V.; Bernard-Salas, J.; Sloan, G. C.; Barry, D. J.

    2010-02-01

    We present new advances in the spectral extraction of pointlike sources adapted to the Infrared Spectrograph (IRS) on board the Spitzer Space Telescope. For the first time, we created a supersampled point-spread function of the low-resolution modules. We describe how to use the point-spread function to perform optimal extraction of a single source and of multiple sources within the slit. We also examine the case of the optimal extraction of one or several sources with a complex background. The new algorithms are gathered in a plug-in called AdOpt which is part of the SMART data analysis software.

  7. Aerosol transmission of foot-and-mouth disease virus Asia-1 under experimental conditions.

    PubMed

    Colenutt, C; Gonzales, J L; Paton, D J; Gloster, J; Nelson, N; Sanders, C

    2016-06-30

    Foot-and-mouth disease virus (FMDV) control measures rely on understanding of virus transmission mechanisms. Direct contact between naïve and infected animals or spread by contaminated fomites is prevented by quarantines and rigorous decontamination procedures during outbreaks. Transmission of FMDV by aerosol may not be prevented by these control measures and this route of transmission may allow infection of animals at distance from the infection source. Understanding the potential for aerosol spread of specific FMDV strains is important for informing control strategies in an outbreak. Here, the potential for transmission of an FMDV Asia 1 strain between pigs and cattle by indirect aerosol exposure was evaluated in an experimental setting. Four naïve calves were exposed to aerosols emitted from three infected pigs in an adjacent room for a 10h period. Direct contact between pigs and cattle and fomite transfer between rooms was prevented. Viral titres in aerosols emitted by the infected pigs were measured to estimate the dose that calves were exposed to. One of the calves developed clinical signs of FMD, whilst there was serological evidence for spread to cattle by aerosol transmission in the remaining three calves. This highlights the possibility that this FMDV Asia 1 strain could be spread by aerosol transmission given appropriate environmental conditions should an outbreak occur in pigs. Our estimates suggest the exposure dose required for aerosol transmission was higher than has been previously quantified for other serotypes, implying that aerosols are less likely to play a significant role in transmission and spread of this FMDV strain. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. On the sensitivity of TG-119 and IROC credentialing to TPS commissioning errors.

    PubMed

    McVicker, Drew; Yin, Fang-Fang; Adamson, Justus D

    2016-01-08

    We investigate the sensitivity of IMRT commissioning using the TG-119 C-shape phantom and credentialing with the IROC head and neck phantom to treatment planning system commissioning errors. We introduced errors into the various aspects of the commissioning process for a 6X photon energy modeled using the analytical anisotropic algorithm within a commercial treatment planning system. Errors were implemented into the various components of the dose calculation algorithm including primary photons, secondary photons, electron contamination, and MLC parameters. For each error we evaluated the probability that it could be committed unknowingly during the dose algorithm commissioning stage, and the probability of it being identified during the verification stage. The clinical impact of each commissioning error was evaluated using representative IMRT plans including low- and intermediate-risk prostate, head and neck, mesothelioma, and scalp; the sensitivity of the TG-119 and IROC phantoms was evaluated by comparing dosimetric changes to the dose planes where film measurements occur and changes in point doses where dosimeter measurements occur. No commissioning errors were found to have both a low probability of detection and high clinical severity. When errors do occur, the IROC credentialing and TG-119 commissioning criteria are generally effective at detecting them; however, for the IROC phantom, OAR point-dose measurements are the most sensitive despite being currently excluded from IROC analysis. Point-dose measurements with an absolute dose constraint were the most effective at detecting errors, while film analysis using a gamma comparison and the IROC film distance-to-agreement criteria were less effective at detecting the specific commissioning errors implemented here.

  9. Achieving routine submillisievert CT scanning: report from the summit on management of radiation dose in CT.

    PubMed

    McCollough, Cynthia H; Chen, Guang Hong; Kalender, Willi; Leng, Shuai; Samei, Ehsan; Taguchi, Katsuyuki; Wang, Ge; Yu, Lifeng; Pettigrew, Roderic I

    2012-08-01

    This Special Report presents the consensus of the Summit on Management of Radiation Dose in Computed Tomography (CT) (held in February 2011), which brought together participants from academia, clinical practice, industry, and regulatory and funding agencies to identify the steps required to reduce the effective dose from routine CT examinations to less than 1 mSv. The most promising technologies and methods discussed at the summit include innovations and developments in x-ray sources; detectors; and image reconstruction, noise reduction, and postprocessing algorithms. Access to raw projection data and standard data sets for algorithm validation and optimization is a clear need, as is the need for new, clinically relevant metrics of image quality and diagnostic performance. Current commercially available techniques such as automatic exposure control, optimization of tube potential, beam-shaping filters, and dynamic z-axis collimators are important, and education to successfully implement these methods routinely is critically needed. Other methods that are just becoming widely available, such as iterative reconstruction, noise reduction, and postprocessing algorithms, will also have an important role. Together, these existing techniques can reduce dose by a factor of two to four. Technical advances that show considerable promise for additional dose reduction but are several years or more from commercial availability include compressed sensing, volume of interest and interior tomography techniques, and photon-counting detectors. This report offers a strategic roadmap for the CT user and research and manufacturer communities toward routinely achieving effective doses of less than 1 mSv, which is well below the average annual dose from naturally occurring sources of radiation.

  10. A shape-based quality evaluation and reconstruction method for electrical impedance tomography.

    PubMed

    Antink, Christoph Hoog; Pikkemaat, Robert; Malmivuo, Jaakko; Leonhardt, Steffen

    2015-06-01

    Linear methods of reconstruction play an important role in medical electrical impedance tomography (EIT) and there is a wide variety of algorithms based on several assumptions. With the Graz consensus reconstruction algorithm for EIT (GREIT), a novel linear reconstruction algorithm as well as a standardized framework for evaluating and comparing methods of reconstruction were introduced that found widespread acceptance in the community. In this paper, we propose a two-sided extension of this concept by first introducing a novel method of evaluation. Instead of being based on point-shaped resistivity distributions, we use 2759 pairs of real lung shapes for evaluation that were automatically segmented from human CT data. Necessarily, the figures of merit defined in GREIT were adjusted. Second, a linear method of reconstruction that uses orthonormal eigenimages as training data and a tunable desired point spread function are proposed. Using our novel method of evaluation, this approach is compared to the classical point-shaped approach. Results show that most figures of merit improve with the use of eigenimages as training data. Moreover, the possibility of tuning the reconstruction by modifying the desired point spread function is shown. Finally, the reconstruction of real EIT data shows that higher contrasts and fewer artifacts can be achieved in ventilation- and perfusion-related images.

  11. A study of optimization techniques in HDR brachytherapy for the prostate

    NASA Astrophysics Data System (ADS)

    Pokharel, Ghana Shyam

    Several studies carried out thus far favor dose escalation to the prostate gland for better local control of the disease, but the optimal way to deliver higher doses of radiation therapy to the prostate without harming neighboring critical structures remains debatable. In this study, we proposed that real-time high dose rate (HDR) brachytherapy with highly efficient and effective optimization could be an alternative means of precise delivery of such higher doses. This delivery approach eliminates critical issues of external beam radiation therapy, such as treatment setup uncertainties and target localization. Likewise, dosimetry in HDR brachytherapy is not influenced by organ edema or potential source migration, as it is in permanent interstitial implants. Moreover, recent reports of radiobiological parameters further strengthen the argument for using hypofractionated HDR brachytherapy in the management of prostate cancer. Firstly, we studied the essential features and requirements of a real-time HDR brachytherapy treatment planning system: automated catheter reconstruction with fast editing tools, a fast yet accurate dose engine, and robust, fast optimization and evaluation engines. Moreover, in most of the cases we performed, treatment plan optimization took a significant share of the overall procedure time, so making it automatic or semi-automatic with sufficient speed and accuracy became the goal of the remainder of the project. Secondly, we studied the role of the optimization function and constraints in the overall quality of the optimized plan. We studied a gradient-based deterministic algorithm with dose-volume histogram (DVH)-based and more conventional variance-based objective functions, in which the relative weight of a particular objective in the aggregate objective function signifies its importance relative to the other objectives. In our study, the DVH-based objective function outperformed the traditional variance-based objective function in creating a clinically acceptable plan under identical conditions. Thirdly, we studied a multiobjective optimization strategy using both DVH- and variance-based objective functions, creating several Pareto-optimal solutions by scanning the clinically relevant part of the Pareto front. This strategy decouples optimization from decision making, so the user can select the final solution from a pool of alternatives according to his or her clinical goals. The overall plan quality improved with this approach compared with the traditional class-solution approach; in fact, the final plan selected by the decision engine with the DVH-based objective was comparable to a typical clinical plan created by an experienced physicist. Next, we studied a hybrid technique comprising both stochastic and deterministic algorithms to optimize both dwell positions and dwell times: a simulated annealing algorithm found an optimal catheter distribution, and the DVH-based algorithm optimized the 3D dose distribution for that catheter distribution. This unique treatment planning and optimization tool produced clinically acceptable, highly reproducible treatment plans in a clinically reasonable time.
    Because this hybrid algorithm created clinically acceptable plans automatically within a clinically reasonable time, it is appealing for real-time procedures. Finally, we studied the feasibility of multiobjective optimization using an evolutionary algorithm for real-time HDR prostate brachytherapy. With properly tuned algorithm-specific parameters, it also created clinically acceptable plans within a clinically reasonable time. However, to keep the time window acceptable for real-time procedures, the algorithm was allowed to run for only a limited number of generations, fewer than is generally considered optimal for such algorithms; further study under improved conditions is therefore required to realize its full potential.
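
    To make the DVH-style penalized optimization concrete, below is a minimal sketch of gradient-based dwell-time optimization with one-sided quadratic penalties (target underdose, organ-at-risk overdose). The dose-rate matrices, dose levels, weights, and step size are hypothetical stand-ins; a clinical engine would use TG-43-based dose-rate kernels and anatomy-derived voxel sets.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical dose-rate matrices (cGy/s per unit dwell time): rows are
        # voxels, columns are dwell positions; a real engine derives these from TG-43.
        n_dwells = 60
        A_target = rng.uniform(0.5, 2.0, (500, n_dwells))   # target voxels
        A_oar = rng.uniform(0.0, 0.8, (400, n_dwells))      # organ-at-risk voxels

        D_RX, D_TOL = 100.0, 70.0     # prescription / tolerance doses (invented)
        W_T, W_O = 1.0, 0.5           # relative objective weights

        def objective_and_grad(t):
            """One-sided quadratic (DVH-style) penalties: underdose in the target,
            overdose in the OAR; voxels meeting their constraint contribute nothing."""
            under = np.minimum(A_target @ t - D_RX, 0.0)
            over = np.maximum(A_oar @ t - D_TOL, 0.0)
            f = W_T * np.sum(under**2) + W_O * np.sum(over**2)
            g = 2 * W_T * (A_target.T @ under) + 2 * W_O * (A_oar.T @ over)
            return f, g

        t = np.full(n_dwells, 1.0)    # initial dwell times (s)
        for _ in range(2000):         # projected gradient descent, dwell times >= 0
            f, g = objective_and_grad(t)
            t = np.maximum(t - 1e-5 * g, 0.0)
        print(f"final objective: {f:.2f}")

    The one-sided penalties are what make the objective DVH-like: only voxels violating their dose goal drive the gradient, mirroring how a DVH constraint ignores voxels already on the right side of the dose-volume point.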

  12. Dose algorithm for EXTRAD 4100S extremity dosimeter for use at Sandia National Laboratories.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Potter, Charles Augustus

    An updated algorithm for the EXTRAD 4100S extremity dosimeter has been derived. This algorithm optimizes the binning of dosimeter element ratios and uses a quadratic function to determine the response factors for low response ratios. This results in lower systematic bias across all test categories and eliminates the need for the 'red strap' algorithm that was used for high-energy beta/gamma-emitting radionuclides. The Radiation Protection Dosimetry Program (RPDP) at Sandia National Laboratories uses the Thermo Fisher EXTRAD 4100S extremity dosimeter, shown in Fig. 1.1, to determine shallow dose to the extremities of potentially exposed individuals. This dosimeter consists of two LiF TLD elements, or 'chipstrates', one of TLD-700 (⁷Li) and one of TLD-100 (natural Li), separated by a tin filter. Following readout and background subtraction, the ratio of the responses of the two elements is determined, defining the penetrability of the incident radiation. While this penetrability approximates the incident energy of the radiation, X-rays and beta particles occur in energy distributions that make the determination of dose conversion factors less straightforward.
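
    A hypothetical sketch of a two-element ratio algorithm of the kind described is shown below: the TLD-700/TLD-100 response ratio selects a binned dose-conversion factor, with a quadratic function used at low (poorly penetrating) ratios. The bin edges, factors, and quadratic coefficients are invented for illustration and are not the derived Sandia algorithm.

        import numpy as np

        RATIO_BINS = [0.3, 0.6, 0.85]          # hypothetical bin edges
        BIN_FACTORS = [1.40, 1.15, 1.00]       # dose per reader unit, per bin (invented)

        def shallow_dose(r700, r100):
            """Return shallow dose from background-subtracted element responses."""
            ratio = r700 / r100                 # penetrability of the incident radiation
            if ratio < RATIO_BINS[0]:
                # Low-penetrability regime: quadratic response-factor function
                factor = 2.1 - 3.0 * ratio + 2.5 * ratio**2   # invented coefficients
            else:
                idx = np.searchsorted(RATIO_BINS, ratio, side="right") - 1
                factor = BIN_FACTORS[min(idx, len(BIN_FACTORS) - 1)]
            return factor * r700                # dose assessed from the 7LiF element

        print(shallow_dose(r700=85.0, r100=100.0))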

  13. Patient-specific quantification of image quality: An automated method for measuring spatial resolution in clinical CT images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanders, Jeremiah, E-mail: jeremiah.sanders@duke.e

    Purpose: To develop and validate an automated technique for evaluating the spatial resolution characteristics of clinical computed tomography (CT) images. Methods: Twenty-one chest and abdominopelvic clinical CT datasets were examined in this study. An algorithm was developed to extract a CT resolution index (RI), analogous to the modulation transfer function, from clinical CT images by measuring the edge-spread function (ESF) across the patient's skin. A polygon mesh of the air-skin boundary was created, and the faces of the mesh were used to measure the ESF across the air-skin interface. The ESF was differentiated to obtain the line-spread function (LSF), and the LSF was Fourier transformed to obtain the RI. The algorithm's ability to detect the radial dependence of the RI was investigated. RIs measured with the proposed method were compared with a conventional phantom-based method across two reconstruction algorithms (FBP and iterative), using the spatial frequency at 50% RI, f50, as the metric for comparison. Three reconstruction kernels were investigated for each reconstruction algorithm. Finally, an observer study was conducted to determine whether observers could visually perceive differences in the measured blurriness of images reconstructed with a given reconstruction method. Results: RI measurements performed with the proposed technique exhibited the expected dependencies on the image reconstruction. The measured f50 values increased with harder kernels for both FBP and iterative reconstruction. Furthermore, the proposed algorithm was able to detect the radial dependence of the RI. Patient-specific measurements of the RI were comparable to the phantom-based technique, but the patient data exhibited a large spread in measured f50, indicating that some datasets were blurrier than others even when the projection data were reconstructed with the same reconstruction algorithm and kernel. Results from the observer study substantiated this finding. Conclusions: Clinically informed, patient-specific spatial resolution can be measured from clinical datasets. The method is sufficiently sensitive to reflect changes in spatial resolution due to different reconstruction parameters. It can be applied to automatically assess the spatial resolution of patient images and quantify dependencies that may not be captured in phantom data.
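
    The measurement pipeline (sample the ESF, differentiate to the LSF, Fourier transform to the RI, read off f50) can be sketched on a synthetic edge as follows; the Gaussian edge width and sampling are arbitrary stand-ins for ESF samples taken across the air-skin mesh.

        import numpy as np
        from scipy.special import erf

        dx = 0.1                                        # sample spacing (mm), assumed
        x = np.arange(-10.0, 10.0, dx)
        esf = 0.5 * (1 + erf(x / (np.sqrt(2) * 0.8)))   # synthetic edge, sigma = 0.8 mm

        lsf = np.gradient(esf, dx)                      # differentiate ESF -> LSF
        lsf /= lsf.sum() * dx                           # normalize LSF to unit area

        ri = np.abs(np.fft.rfft(lsf)) * dx              # RI ~ |Fourier transform of LSF|
        ri /= ri[0]                                     # unity at zero frequency
        freqs = np.fft.rfftfreq(lsf.size, d=dx)         # spatial frequency (cycles/mm)

        f50 = freqs[np.argmin(np.abs(ri - 0.5))]        # spatial frequency at 50% RI
        print(f"f50 = {f50:.2f} cycles/mm")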

  14. Calibrating page-sized Gafchromic EBT3 films

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crijns, W.; Maes, F.; Heide, U. A. van der

    2013-01-15

    Purpose: The purpose is the development of a novel calibration method for dosimetry with Gafchromic EBT3 films. The method should be applicable to pretreatment verification of volumetric modulated arc and intensity modulated radiotherapy. Because the exposed area on film can be large for such treatments, lateral scan errors must be taken into account; the correction for the lateral scan effect is obtained from the calibration data itself. Methods: In this work, the film measurements were modeled using their relative scan values (transmittance, T). Within the transmittance domain, a linear combination and a parabolic lateral scan correction described the observed transmittance values. The linear combination model combined a monomer transmittance state (T0) and a polymer transmittance state (T∞) of the film. The dose domain was associated with the observed effects in the transmittance domain through a rational calibration function. Only simple static fields were applied to the calibration film, and page-sized films were used for both calibration and measurements (treatment verification). Four different calibration setups were considered and compared with respect to dose estimation accuracy. The first (I) used a calibration table from 32 regions of interest (ROIs) spread over 4 calibration films, the second (II) used 16 ROIs spread over 2 calibration films, and the third (III) and fourth (IV) used 8 ROIs spread over a single calibration film. The calibration tables of setups I, II, and IV contained eight dose levels delivered to different positions on the films, while for setup III only four dose levels were applied. Validation was performed by irradiating film strips with known doses at two different time points over the course of a week. Accuracy of the dose response and of the lateral-effect correction was estimated using the dose difference and the root mean squared error (RMSE), respectively. Results: A calibration based on two films was the optimal balance between cost effectiveness and dosimetric accuracy. The validation resulted in dose errors of 1%-2% at the two time points, with a maximal absolute dose error around 0.05 Gy. The lateral correction reduced the RMSE values at the sides of the film to the RMSE values at the center of the film. Conclusions: EBT3 Gafchromic films were calibrated for large-field dosimetry with a limited number of page-sized films and simple static calibration fields. The transmittance was modeled as a linear combination of two transmittance states and associated with dose using a rational calibration function. Additionally, the lateral scan effect was resolved within the calibration function itself, allowing the use of page-sized films. Only two calibration films were required to estimate both the dose and the lateral response. The calibration films were used over the course of a week, with residual dose errors ≤ 2% or ≤ 0.05 Gy.
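
    A minimal sketch of this two-state model is given below, assuming invented values for T0, T∞, the half-conversion dose, and the lateral-correction coefficient; the actual calibration fits all of these from the page-sized calibration films, and the specific rational form used here is one plausible choice, not necessarily the paper's.

        T0, TINF = 0.95, 0.20    # monomer / polymer transmittance states (invented)
        D50 = 3.0                # dose (Gy) at half conversion (invented)

        def transmittance(dose, x_lat=0.0, a=0.002):
            """Two-state model with a parabolic lateral scan term; the rational
            dose -> conversion form below is an illustrative assumption."""
            frac = dose / (dose + D50)                  # rational dose response
            t = (1 - frac) * T0 + frac * TINF           # linear combination of states
            return t * (1 + a * x_lat**2)               # parabolic lateral scan effect

        def dose_from_transmittance(t, x_lat=0.0, a=0.002):
            """Invert: remove the lateral term, then solve the rational function."""
            t_corr = t / (1 + a * x_lat**2)
            frac = (T0 - t_corr) / (T0 - TINF)
            return D50 * frac / (1 - frac)

        t = transmittance(2.0, x_lat=5.0)               # 2 Gy, 5 cm off-center
        print(f"recovered dose: {dose_from_transmittance(t, x_lat=5.0):.3f} Gy")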

  15. Evaluation of On-Board kV Cone Beam Computed Tomography–Based Dose Calculation With Deformable Image Registration Using Hounsfield Unit Modifications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Onozato, Yusuke; Kadoya, Noriyuki, E-mail: kadoya.n@rad.med.tohoku.ac.jp; Fujita, Yukio

    2014-06-01

    Purpose: The purpose of this study was to estimate the accuracy of dose calculation on On-Board Imager (Varian, Palo Alto, CA) cone beam computed tomography (CBCT) images with deformable image registration (DIR), using the multilevel-threshold (MLT) and histogram matching (HM) algorithms, in pelvic radiation therapy. Methods and Materials: One pelvis phantom and 10 patients with prostate cancer treated with intensity modulated radiation therapy were studied. To minimize the effect of organ deformation and of different Hounsfield unit values between the planning CT (PCT) and CBCT, we modified the CBCT (mCBCT) with DIR using the MLT (mCBCT_MLT) and HM (mCBCT_HM) algorithms. To evaluate the accuracy of the dose calculation, we compared differences in dosimetric parameters (mean dose [Dmean], minimum dose [Dmin], and maximum dose [Dmax]) for the planning target volume, rectum, and bladder between the PCT (reference) and the CBCTs or mCBCTs. Furthermore, we investigated the effect of organ deformation by comparing DIR with rigid registration (RR). We determined whether dose differences between the PCT and the mCBCTs were significantly lower than those for CBCT using Student's t test. Results: For patients, the average dose differences in all dosimetric parameters of CBCT with DIR were smaller than those of CBCT with RR (e.g., rectum: 0.54% for DIR vs 1.24% for RR). For the mCBCTs with DIR, the average dose differences in all dosimetric parameters were less than 1.0%. Conclusions: We evaluated the accuracy of dose calculation on CBCT, mCBCT_MLT, and mCBCT_HM with DIR for 10 patients. The results showed that dose differences in Dmean, Dmin, and Dmax for the mCBCTs were within 1%, significantly better than those for CBCT, especially for the rectum (P<.05). Our results indicate that mCBCT_MLT and mCBCT_HM can be useful for improving dose calculation for adaptive radiation therapy.
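
    As an illustration of the HM idea, the sketch below maps each CBCT value to the planning-CT value at the same cumulative-probability rank. This is a generic histogram-matching routine on toy inputs, not the authors' exact algorithm (which also involves DIR and the MLT variant).

        import numpy as np

        def histogram_match(cbct, pct):
            """Map each CBCT value to the planning-CT value at the same quantile."""
            c_vals, c_counts = np.unique(cbct.ravel(), return_counts=True)
            c_cdf = np.cumsum(c_counts) / cbct.size
            p_sorted = np.sort(pct.ravel())
            p_cdf = np.arange(1, p_sorted.size + 1) / p_sorted.size
            matched = np.interp(c_cdf, p_cdf, p_sorted)   # quantile -> PCT value
            lookup = dict(zip(c_vals, matched))
            return np.vectorize(lookup.get)(cbct)

        rng = np.random.default_rng(1)
        cbct = rng.normal(60, 120, (64, 64)).astype(np.int16)   # toy CBCT HU slice
        pct = rng.normal(40, 100, (64, 64)).astype(np.int16)    # toy planning-CT slice
        print(histogram_match(cbct, pct).mean())                # mean shifts toward PCT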

  16. Iterative reconstruction for CT perfusion with a prior-image induced hybrid nonlocal means regularization: Phantom studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Bin; Lyu, Qingwen; Ma, Jianhua

    2016-04-15

    Purpose: In computed tomography perfusion (CTP) imaging, an initial-phase CT acquired with a high-dose protocol can be used to improve the image quality of a later-phase CT acquired with a low-dose protocol. For dynamic regions, however, signals in the later low-dose CT may not be completely recovered if the initial CT heavily regularizes the iterative reconstruction process. The authors propose a hybrid nonlocal means (hNLM) regularization model for iterative reconstruction of low-dose CTP to overcome this limitation of the conventional prior-image induced penalty. Methods: The hybrid penalty was constructed by combining the NLM of the initial-phase high-dose CT in the stationary region with that of the later-phase low-dose CT in the dynamic region. The stationary and dynamic regions were determined by the similarity between the initial high-dose scan and the later low-dose scan. The similarity was defined as a Gaussian kernel-based distance between the patch windows of the same pixel in the two scans, and this measurement was used to weight the influence of the initial high-dose CT. For regions with high similarity (e.g., stationary regions), the initial high-dose CT played a dominant role in regularizing the solution; for regions with low similarity (e.g., dynamic regions), the regularization relied on the low-dose scan itself. This new hNLM penalty was incorporated into penalized weighted least-squares (PWLS) for CTP reconstruction. Digital and physical phantom studies were performed to evaluate the PWLS-hNLM algorithm. Results: Both phantom studies showed that the PWLS-hNLM algorithm is superior to the conventional prior-image induced penalty term, which does not account for signal changes within the dynamic region. In the dynamic region of the Catphan phantom, the reconstruction error measured by root mean square error was reduced by 42.9% in the PWLS-hNLM reconstructed image. Conclusions: The PWLS-hNLM algorithm can effectively use the initial high-dose CT to reconstruct low-dose CTP in the stationary region while reducing its influence in the dynamic region.
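
    The Gaussian kernel-based patch similarity that steers the hybrid penalty can be sketched as follows; the patch radius, kernel width h, and toy images are hypothetical choices, and the actual PWLS iteration is omitted.

        import numpy as np

        def similarity_map(prior, current, half=2, h=30.0):
            """Weight in [0, 1] per pixel: ~1 where the two scans agree (stationary
            region, prior dominates), ~0 where they differ (dynamic region)."""
            pad_p = np.pad(prior.astype(float), half, mode="reflect")
            pad_c = np.pad(current.astype(float), half, mode="reflect")
            h_img, w_img = prior.shape
            dist2 = np.zeros((h_img, w_img))
            n = (2 * half + 1) ** 2
            for dy in range(-half, half + 1):      # accumulate squared patch distance
                for dx in range(-half, half + 1):
                    a = pad_p[half + dy : half + dy + h_img, half + dx : half + dx + w_img]
                    b = pad_c[half + dy : half + dy + h_img, half + dx : half + dx + w_img]
                    dist2 += (a - b) ** 2 / n
            return np.exp(-dist2 / h**2)           # Gaussian kernel of patch distance

        rng = np.random.default_rng(2)
        prior = rng.normal(0, 10, (32, 32))        # toy high-dose initial scan
        current = prior.copy()
        current[8:16, 8:16] += 80                  # simulated dynamic (perfusion) region
        w = similarity_map(prior, current)
        print(w[12, 12], w[28, 28])                # low weight in dynamic region, high elsewhere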

  17. SU-F-T-377: Monte Carlo Re-Evaluation of Volumetric-Modulated Arc Plans of Advanced Stage Nasopharyngeal Cancers Optimized with Convolution-Superposition Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, K; Leung, R; Law, G

    Background: The commercial treatment planning system Pinnacle3 (Philips, Fitchburg, WI, USA) employs a convolution-superposition (CS) algorithm for volumetric-modulated arc radiotherapy (VMAT) optimization and dose calculation. Monte Carlo (MC) dose recalculation of VMAT plans for advanced-stage nasopharyngeal cancers (NPC) has so far received limited study. Methods: Twenty-nine VMAT plans prescribing 70 Gy, 60 Gy, and 54 Gy to the planning target volumes (PTVs) were included. These clinical plans, achieved with a CS dose engine on Pinnacle3 v9.0, were recalculated by the Monaco TPS v5.0 (Elekta, Maryland Heights, MO, USA) with an XVMC-based MC dose engine. The MC virtual source model was built using the same measured beam dataset as the Pinnacle beam model. All MC recalculations were based on absorbed dose to medium in medium (Dm,m). Differences in the dose constraint parameters of our institutional protocol (Supplementary Table 1) were analyzed. Results: Only the differences in maximum dose to the left brachial plexus, the left temporal lobe, and PTV54Gy were statistically insignificant (p > 0.05). Dosimetric differences for the other tumor targets and normal organs are given in Supplementary Table 1. Generally, doses outside the PTV in the normal organs are lower with MC than with CS. The same holds for the PTV54-70Gy doses, but a higher dose in the nasal cavity near the bone interfaces is consistently predicted by MC, possibly due to increased backscattering of short-range scattered photons and secondary electrons that is not properly modeled by CS. The straight shoulders of the PTV dose-volume histograms (DVHs) initially obtained from the CS optimization are barely preserved after MC recalculation. Conclusion: Significant dosimetric differences in VMAT NPC plans were observed between CS and MC calculations. When VMAT optimization is carried out directly with an MC dose engine, the planning dose constraints should be adjusted to incorporate the physics differences from the conventional CS algorithm.
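
    The constraint analysis above amounts to comparing DVH-derived metrics between two dose grids for the same structure. A toy sketch of such a comparison follows, with synthetic voxel doses standing in for the CS and MC grids; the dvh_metrics helper and all values are invented for illustration.

        import numpy as np

        def dvh_metrics(dose, v=0.95):
            """Return (Dmax, dose covering fraction v of the volume, i.e. D95)."""
            d = np.sort(dose)[::-1]
            return d.max(), d[int(v * d.size) - 1]

        rng = np.random.default_rng(3)
        cs = rng.normal(70.0, 1.0, 5000)        # CS doses in PTV70 voxels (Gy), synthetic
        mc = cs - rng.normal(1.0, 0.6, 5000)    # MC recalculation, systematically lower

        for name, d in (("CS", cs), ("MC", mc)):
            dmax, d95 = dvh_metrics(d)
            print(f"{name}: Dmax = {dmax:.1f} Gy, D95 = {d95:.1f} Gy")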

  18. Effects of multiple spreaders in community networks

    NASA Astrophysics Data System (ADS)

    Hu, Zhao-Long; Ren, Zhuo-Ming; Yang, Guang-Yong; Liu, Jian-Guo

    2014-12-01

    Human contact networks exhibit community structure. Understanding how such structure affects epidemic spreading could provide insights for preventing the spread of epidemics between communities. In this paper, we explore the spreading of multiple spreaders in community networks. A network is evolved based on a clustering preferential mechanism, and its communities are detected with the Girvan-Newman (GN) algorithm. We investigate spreading effectiveness by selecting spreader nodes in the following ways: the node with the largest degree in each community (community hubs), the same number of largest-degree nodes from the global network (global large-degree), and one randomly selected node within each community (community random). Experimental results on the SIR model show that the spreading effectiveness of the global large-degree and community hubs methods is the same in the early stage of the infection, and that the community random method is the worst. However, when the infection rate exceeds the critical value, the global large-degree method shows the worst spreading effectiveness. Furthermore, the discrepancy in effectiveness among the three methods decreases as the infection rate increases. Therefore, to prevent the outbreak of epidemics, one should immunize the hubs in each community rather than the hubs of the global network.
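
    A toy version of this experiment can be written with networkx as below. Note that greedy modularity community detection is swapped in for the GN algorithm used in the paper (for brevity), and the planted-partition graph, infection rate, and seed counts are illustrative assumptions.

        import random
        import networkx as nx
        from networkx.algorithms.community import greedy_modularity_communities

        def sir(G, seeds, beta=0.1, steps=30):
            """Discrete-time SIR with recovery after one step (gamma = 1);
            returns the final outbreak size."""
            infected, recovered = set(seeds), set()
            for _ in range(steps):
                new_inf = {v for u in infected for v in G[u]
                           if v not in infected and v not in recovered
                           and random.random() < beta}
                recovered |= infected
                infected = new_inf
            return len(recovered | infected)

        random.seed(0)
        G = nx.planted_partition_graph(4, 50, 0.3, 0.01, seed=0)   # 4 communities
        comms = greedy_modularity_communities(G)
        hub_seeds = [max(c, key=G.degree) for c in comms]          # community hubs
        global_seeds = sorted(G, key=G.degree, reverse=True)[:len(comms)]

        print("community hubs:", sir(G, hub_seeds))
        print("global large-degree:", sir(G, global_seeds))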

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Y; Liu, B; Liang, B

    Purpose: The current CyberKnife treatment planning system (TPS) provides two dose calculation algorithms: Ray-tracing and Monte Carlo. The Ray-tracing algorithm is fast but less accurate, and it cannot handle the irregular fields enabled by the multi-leaf collimator system recently introduced with the CyberKnife M6. The Monte Carlo method has well-known accuracy, but the current version still takes a long time to finish dose calculations. The purpose of this work was to develop a GPU-based fast collapsed-cone convolution/superposition (C/S) dose engine for the CyberKnife system that achieves both accuracy and efficiency. Methods: The TERMA distribution from a poly-energetic source was calculated in the beam's-eye-view coordinate system, which is GPU friendly and has linear complexity. The dose distribution was then computed by inversely collecting the energy depositions from all TERMA points along 192 collapsed-cone directions. The EGSnrc user code was used to pre-calculate energy deposition kernels (EDKs) for a series of mono-energetic photons. The energy spectrum was reconstructed from the measured tissue maximum ratio (TMR) curve, and the TERMA-averaged cumulative kernels were then calculated. Beam hardening parameters and intensity profiles were optimized based on measurement data from the CyberKnife system. Results: The differences between measured and calculated TMR are less than 1% for all collimators except in the build-up regions. The calculated profiles also showed good agreement with the measured doses, within 1%, except in the penumbra regions. The developed C/S dose engine was also used to evaluate four clinical CyberKnife treatment plans; compared against the Monte Carlo method for heterogeneous cases, it showed better dose calculation accuracy than the Ray-tracing algorithm. As for calculation time, one beam takes a few seconds, depending on collimator size and the dose calculation grid. Conclusion: A GPU-based C/S dose engine has been developed for the CyberKnife system; it was shown to be efficient and accurate for clinical purposes and can be easily implemented in a TPS.
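
    A minimal 1-D sketch of the poly-energetic TERMA step follows: each spectral bin contributes its energy fluence times its mass attenuation coefficient, attenuated exponentially with depth. The two-bin spectrum is a rough stand-in for the reconstructed spectrum, and the water coefficients are approximate tabulated values; the full engine would then collect dose from these TERMA points along the 192 collapsed-cone directions using the EDKs.

        import numpy as np

        depth = np.linspace(0.0, 20.0, 201)      # depth in water (cm)
        rho = 1.0                                # density of water (g/cm^3)
        # Two-bin stand-in for the reconstructed spectrum: MeV -> fluence weight.
        spectrum = {1.25: 0.6, 5.0: 0.4}
        # Approximate mass attenuation coefficients of water (cm^2/g).
        mu_rho = {1.25: 0.0632, 5.0: 0.0303}

        terma = np.zeros_like(depth)
        for e, w in spectrum.items():
            psi0 = w * e                         # energy fluence per unit photon fluence
            # TERMA(d) = (mu/rho)(E) * Psi_E(d), with exponential primary attenuation.
            terma += mu_rho[e] * psi0 * np.exp(-mu_rho[e] * rho * depth)

        print(terma[:3])                         # toy scale, per unit incident fluence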

  20. WE-DE-BRA-09: Fast Megavoltage CT Imaging with Rapid Scan Time and Low Imaging Dose in Helical Tomotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Magome, T; University of Tokyo Hospital, Tokyo; University of Minnesota, Minneapolis, MN

    Purpose: Megavoltage computed tomography (MVCT) imaging has been widely used for daily patient setup in helical tomotherapy (HT). One drawback of MVCT is its very long imaging time, owing to slow couch speed. The purpose of this study was to develop an MVCT imaging method allowing faster couch speeds, and to assess its accuracy for image guidance in HT. Methods: Three cadavers (closely mimicking the physiological and physical characteristics of patients) were scanned four times, with couch speeds of 1, 2, 3, and 4 mm/s. The resulting MVCT images were reconstructed using an iterative reconstruction (IR) algorithm. The MVCT images were registered to kilovoltage CT images, and the registration errors were compared with those of the conventional filtered back projection (FBP) algorithm. Moreover, fast MVCT imaging was tested in three cases of total marrow irradiation in a clinical trial. Results: Three-dimensional registration errors of the MVCT images reconstructed with the IR algorithm were significantly smaller (p < 0.05) than the errors of images reconstructed with the FBP algorithm at fast couch speeds (3 and 4 mm/s). The scan time and imaging dose at a speed of 4 mm/s were reduced to 30% of those of a conventional coarse-mode scan. For patient imaging, a limited number of conventional MVCT (1.2 mm/s) and fast MVCT (3 mm/s) scans showed acceptably reduced imaging time and dose while remaining usable for anatomical registration. Conclusion: Fast MVCT with an IR algorithm may be a clinically feasible alternative for rapid 3D patient localization. This technique may also be useful for calculating daily dose distributions or for organ-motion analyses over a wide area in HT treatment.
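
    As a contrast with single-pass FBP, the following is a minimal sketch of the iterative-reconstruction idea on a toy linear system (plain gradient-descent least squares). The study's actual IR algorithm, scanner geometry, and noise model are not specified here, so the system matrix, data, and update rule are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(4)
        A = rng.uniform(0, 1, (96, 64))            # toy system (projection) matrix
        x_true = rng.uniform(0, 1, 64)             # toy image
        b = A @ x_true + rng.normal(0, 0.05, 96)   # noisy projections

        x = np.zeros(64)
        step = 1.0 / np.linalg.norm(A, 2) ** 2     # safe step for least squares
        for _ in range(500):                       # x <- x + step * A^T (b - A x)
            x += step * A.T @ (b - A @ x)

        print("rmse:", np.sqrt(np.mean((x - x_true) ** 2)))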
