DOE Office of Scientific and Technical Information (OSTI.GOV)
Yan Di; Liang Jian
Purpose: To construct an expected treatment dose for adaptive inverse planning optimization, and to evaluate it on head and neck (H&N) cancer adaptive treatment modification. Methods: An adaptive inverse planning engine was developed and integrated into our in-house adaptive treatment control system. The engine includes an expected treatment dose, constructed using the daily cone beam CT (CBCT) images, in its objective and constraints. Feasibility of the adaptive inverse planning optimization was evaluated retrospectively using daily CBCT images obtained from the image-guided IMRT treatment of 19 H&N cancer patients. Adaptive treatment modification strategies with respect to the timing and number of adaptive inverse planning optimizations during the treatment course were evaluated using the cumulative treatment dose in organs of interest constructed from all daily CBCT images. Results: The expected treatment dose was constructed to include both the dose delivered to date and the estimated dose for the remaining treatment during the adaptive treatment course. It was used in treatment evaluation, as well as in constructing the objective and constraints for adaptive inverse planning optimization. The optimization engine proved feasible for performing planning optimization on a preassigned treatment modification schedule. Compared to conventional IMRT, the adaptive treatment for H&N cancer showed clear dose-volume improvement for all critical normal organs. The dose-volume reductions of the right and left parotid glands, spinal cord, brain stem, and mandible were (17 ± 6)%, (14 ± 6)%, (11 ± 6)%, (12 ± 8)%, and (5 ± 3)%, respectively, with a single adaptive modification performed after the second treatment week; (24 ± 6)%, (22 ± 8)%, (21 ± 5)%, (19 ± 8)%, and (10 ± 6)% with three weekly modifications; and (28 ± 5)%, (25 ± 9)%, (26 ± 5)%, (24 ± 8)%, and (15 ± 9)% with five weekly modifications.
Conclusions: Adaptive treatment modification can be implemented by including the expected treatment dose in the adaptive inverse planning optimization. The retrospective evaluation demonstrates that with weekly adaptive inverse planning optimization, the dose distribution of H&N cancer treatment can be substantially improved.
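The expected treatment dose described here, delivered dose to date plus a projection for the remaining fractions, can be sketched in a few lines. The array shapes and the use of the planned per-fraction dose as the estimate for the remaining fractions are illustrative assumptions, not details from the abstract:

```python
import numpy as np

def expected_treatment_dose(delivered_daily, total_fractions, planned_fraction_dose):
    """Expected cumulative dose per voxel: the dose delivered to date plus
    the planned per-fraction dose scaled by the number of remaining fractions.

    delivered_daily: list of per-voxel dose arrays, one per delivered fraction
    planned_fraction_dose: per-voxel dose array assumed for each remaining fraction
    """
    delivered = np.sum(delivered_daily, axis=0)
    remaining = total_fractions - len(delivered_daily)
    return delivered + remaining * planned_fraction_dose

# toy example: 3 of 35 fractions delivered on a 4-voxel structure
daily = [np.array([2.0, 1.9, 2.1, 2.0])] * 3
expected = expected_treatment_dose(daily, 35, np.array([2.0, 2.0, 2.0, 2.0]))
```

In an adaptive workflow the same quantity would be recomputed after each fraction, with the daily CBCT-derived doses replacing the toy arrays.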
Jamema, Swamidas V; Kirisits, Christian; Mahantshetty, Umesh; Trnkova, Petra; Deshpande, Deepak D; Shrivastava, Shyam K; Pötter, Richard
2010-12-01
Comparison of inverse planning with the standard clinical plan and with the manually optimized plan, based on dose-volume parameters and loading patterns. Twenty-eight patients who underwent MRI-based HDR brachytherapy for cervix cancer were selected for this study. Three plans were calculated for each patient: (1) standard loading, (2) manually optimized, and (3) inverse optimized. Dosimetric outcomes from these plans were compared based on dose-volume parameters. The ratio of Total Reference Air Kerma of ovoid to tandem (TRAK(O/T)) was used to compare the loading patterns. The volume of HR CTV ranged from 9 to 68 cc, with a mean of 41 (±16.2) cc. Differences in mean V100 among the standard, manually optimized, and inverse plans were not significant (p = 0.35, 0.38, 0.4). Dose to the bladder (7.8 ± 1.6 Gy) and sigmoid (5.6 ± 1.4 Gy) was high for standard plans; manual optimization reduced the dose to the bladder (7.1 ± 1.7 Gy, p = 0.006) and sigmoid (4.5 ± 1.0 Gy, p = 0.005) without compromising HR CTV coverage. The inverse plan resulted in a significant reduction of bladder dose (6.5 ± 1.4 Gy, p = 0.002). TRAK was found to be 0.49 (±0.02), 0.44 (±0.04), and 0.40 (±0.04) cGy·m² for the standard loading, manually optimized, and inverse plans, respectively. TRAK(O/T) was 0.82 (±0.05), 1.7 (±1.04), and 1.41 (±0.93) for the standard loading, manually optimized, and inverse plans, respectively, while this ratio is 1 for the traditional loading pattern. Inverse planning offers good sparing of critical structures without compromising target coverage. The average loading pattern of the whole patient cohort deviates from the standard Fletcher loading pattern. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Trnková, Petra; Baltas, Dimos; Karabis, Andreas; Stock, Markus; Dimopoulos, Johannes; Georg, Dietmar; Pötter, Richard; Kirisits, Christian
2010-12-01
The purpose of this study was to compare two inverse planning algorithms for cervical cancer brachytherapy with conventional manual treatment planning according to the MUW (Medical University of Vienna) protocol. For 20 patients, manually optimized treatment plans and inversely optimized plans using Hybrid Inverse treatment Planning and Optimization (HIPO) and Inverse Planning Simulated Annealing (IPSA) were created. Dosimetric parameters, absolute volumes of normal tissue receiving reference doses, absolute loading times of tandem, ring, and interstitial needles, and Paddick and COIN conformity indices were evaluated. HIPO was able to achieve a dose distribution similar to manual planning while restricting high dose regions. It reduced the loading time of needles and the overall treatment time. The values of both conformity indices were the lowest. IPSA was able to achieve acceptable dosimetric results; however, it overloaded the needles, resulting in high dose regions located in normal tissue. The Paddick index for the volume of two times the prescribed dose was outstandingly low. HIPO can produce clinically acceptable treatment plans while eliminating high dose regions in normal tissue. Compared to IPSA, it is an inverse optimization method which takes into account current clinical experience gained from manual treatment planning.
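The core idea behind IPSA, a stochastic search over nonnegative dwell times against a dose-volume penalty, can be illustrated with a toy simulated-annealing loop. The dose-rate kernel, prescription levels, and cooling schedule below are all invented for the sketch and are far simpler than any clinical implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical dose-rate kernel: dose[i] = sum_j A[i, j] * t[j]
# rows are calculation points (first 3 on the target, last 2 on an OAR),
# columns are dwell positions
A = rng.uniform(0.5, 2.0, size=(5, 4))
target_rows, oar_rows = slice(0, 3), slice(3, 5)
d_presc, d_limit = 8.0, 6.0  # illustrative prescription and OAR limit

def cost(t):
    d = A @ t
    under = np.clip(d_presc - d[target_rows], 0, None)  # target underdose
    over = np.clip(d[oar_rows] - d_limit, 0, None)      # OAR overdose
    return np.sum(under**2) + np.sum(over**2)

# simulated annealing over nonnegative dwell times
t = np.ones(4)
best_t, best_c = t.copy(), cost(t)
temp = 1.0
for step in range(5000):
    cand = np.clip(t + rng.normal(0, 0.1, size=4), 0, None)  # perturb, keep t >= 0
    delta = cost(cand) - cost(t)
    if delta < 0 or rng.random() < np.exp(-delta / temp):    # Metropolis acceptance
        t = cand
    if cost(t) < best_c:
        best_t, best_c = t.copy(), cost(t)
    temp *= 0.999  # geometric cooling
```

The acceptance of occasional uphill moves is what lets annealing escape local minima of the dose-volume cost, at the price of needing a cooling schedule.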
Fraass, Benedick A.; Steers, Jennifer M.; Matuszak, Martha M.; McShan, Daniel L.
2012-01-01
Purpose: Inverse planned intensity modulated radiation therapy (IMRT) has helped many centers implement highly conformal treatment planning with beamlet-based techniques. The many comparisons between IMRT and 3D conformal (3DCRT) plans, however, have been limited because most 3DCRT plans are forward-planned while IMRT plans utilize inverse planning, meaning both optimization and delivery techniques are different. This work avoids that problem by comparing 3D plans generated with a unique inverse planning method for 3DCRT called inverse-optimized 3D (IO-3D) conformal planning. Since IO-3D and the beamlet IMRT to which it is compared use the same optimization techniques, cost functions, and plan evaluation tools, direct comparisons between IMRT and simple, optimized IO-3D plans are possible. Though IO-3D has some similarity to direct aperture optimization (DAO), since it directly optimizes the apertures used, IO-3D is specifically designed for 3DCRT fields (i.e., 1–2 apertures per beam) rather than starting with IMRT-like modulation and then optimizing aperture shapes. The two algorithms are very different in design, implementation, and use. The goals of this work include using IO-3D to evaluate how close simple but optimized IO-3D plans come to nonconstrained beamlet IMRT, showing that optimization, rather than modulation, may be the most important aspect of IMRT (for some sites). Methods: The IO-3D dose calculation and optimization functionality is integrated in the in-house 3D planning/optimization system. New features include random point dose calculation distributions, costlet and cost function capabilities, fast dose volume histogram (DVH) and plan evaluation tools, optimization search strategies designed for IO-3D, and an improved, reimplemented edge/octree calculation algorithm. 
The IO-3D optimization, in distinction to DAO, is designed to optimize 3D conformal plans (one to two segments per beam) and optimizes MLC segment shapes and weights with various user-controllable search strategies which optimize plans without beamlet or pencil beam approximations. IO-3D allows comparisons of beamlet, multisegment, and conformal plans optimized using the same cost functions, dose points, and plan evaluation metrics, so quantitative comparisons are straightforward. Here, comparisons of IO-3D and beamlet IMRT techniques are presented for breast, brain, liver, and lung plans. Results: IO-3D achieves high quality results comparable to beamlet IMRT for many situations. Though the IO-3D plans have many fewer degrees of freedom for the optimization, this work finds that IO-3D plans with only one to two segments per beam are dosimetrically equivalent (or nearly so) to the beamlet IMRT plans for several sites. IO-3D also reduces plan complexity significantly. Here, monitor units per fraction (MU/Fx) for IO-3D plans were 22%–68% less than those for the 1 cm × 1 cm beamlet IMRT plans and 72%–84% less than those for the 0.5 cm × 0.5 cm beamlet IMRT plans. Conclusions: The unique IO-3D algorithm illustrates that inverse planning can achieve high quality 3D conformal plans equivalent (or nearly so) to unconstrained beamlet IMRT plans for many sites. IO-3D thus provides the potential to optimize flat or few-segment 3DCRT plans, creating less complex optimized plans which are efficient and simple to deliver. The less complex IO-3D plans have operational advantages for scenarios including adaptive replanning, cases with interfraction and intrafraction motion, and pediatric patients. PMID:22755717
Babier, Aaron; Boutilier, Justin J; Sharpe, Michael B; McNiven, Andrea L; Chan, Timothy C Y
2018-05-10
We developed and evaluated a novel inverse optimization (IO) model to estimate objective function weights from clinical dose-volume histograms (DVHs). These weights were used to solve a treatment planning problem to generate 'inverse plans' that had similar DVHs to the original clinical DVHs. Our methodology was applied to 217 clinical head and neck cancer treatment plans that were previously delivered at Princess Margaret Cancer Centre in Canada. Inverse plan DVHs were compared to the clinical DVHs using objective function values, dose-volume differences, and frequency of clinical planning criteria satisfaction. Median differences between the clinical and inverse DVHs were within 1.1 Gy. For most structures, the difference in clinical planning criteria satisfaction between the clinical and inverse plans was at most 1.4%. For structures where the two plans differed by more than 1.4% in planning criteria satisfaction, the difference in average criterion violation was less than 0.5 Gy. Overall, the inverse plans were very similar to the clinical plans. Compared with a previous inverse optimization method from the literature, our new inverse plans typically satisfied the same or more clinical criteria, and had consistently lower fluence heterogeneity. Overall, this paper demonstrates that DVHs, which are essentially summary statistics, provide sufficient information to estimate objective function weights that result in high quality treatment plans. However, as with any summary statistic that compresses three-dimensional dose information, care must be taken to avoid generating plans with undesirable features such as hotspots; our computational results suggest that such undesirable spatial features were uncommon. Our IO-based approach can be integrated into the current clinical planning paradigm to better initialize the planning process and improve planning efficiency. 
It could also be embedded in a knowledge-based planning or adaptive radiation therapy framework to automatically generate a new plan given a predicted or updated target DVH, respectively.
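The weight-estimation idea, choosing nonnegative objective weights under which the observed clinical plan looks optimal, can be sketched on a toy two-objective problem. The influence matrices, quadratic objective forms, hidden "clinical" weights, and the NNLS stationarity formulation below are illustrative assumptions, not the paper's actual model:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)

# hypothetical setup: 6 beamlets, dose-influence matrices for 2 structures
D_tumor = rng.uniform(0, 1, size=(8, 6))
D_oar = rng.uniform(0, 1, size=(8, 6))
d_presc = 2.0

def grads(x):
    # gradients of two quadratic objectives at plan x:
    # tumor dose deviation from prescription, and squared OAR dose
    g_tumor = 2 * D_tumor.T @ (D_tumor @ x - d_presc)
    g_oar = 2 * D_oar.T @ (D_oar @ x)
    return np.column_stack([g_tumor, g_oar])

# "clinical" plan: generated with hidden weights (0.7, 0.3) by projected descent
w_true = np.array([0.7, 0.3])
x = np.ones(6)
for _ in range(2000):
    x = np.clip(x - 0.01 * grads(x) @ w_true, 0, None)

# inverse optimization: find nonnegative weights with sum(w) ~= 1 that make the
# observed plan near-stationary, via nonnegative least squares on the system
# [G; penalty * 1s] w ~= [0; penalty]
G = grads(x)
penalty = 100.0
A = np.vstack([G, penalty * np.ones((1, 2))])
b = np.concatenate([np.zeros(6), [penalty]])
w_est, _ = nnls(A, b)
```

Because `nnls` solves its problem to global optimality, the recovered weights fit the stationarity condition at least as well as the hidden weights do; in a real system the objectives would come from the clinical protocol and the plan from the treatment planning system.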
A gEUD-based inverse planning technique for HDR prostate brachytherapy: Feasibility study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giantsoudi, D. (Department of Radiation Oncology, Francis H. Burr Proton Therapy Center, Boston, Massachusetts 02114); Baltas, D.
2013-04-15
Purpose: The purpose of this work was to study the feasibility of a new inverse planning technique based on the generalized equivalent uniform dose for image-guided high dose rate (HDR) prostate cancer brachytherapy, in comparison to conventional dose-volume based optimization. Methods: The quality of 12 clinical HDR brachytherapy implants for prostate utilizing HIPO (Hybrid Inverse Planning Optimization) is compared with alternative plans, which were produced through inverse planning using the generalized equivalent uniform dose (gEUD). All the common dose-volume indices for the prostate and the organs at risk were considered together with radiobiological measures. The clinical effectiveness of the different dose distributions was investigated by comparing dose volume histogram and gEUD evaluators. Results: Our results demonstrate the feasibility of gEUD-based inverse planning in HDR brachytherapy implants for prostate. A statistically significant decrease in D10 and/or final gEUD values for the organs at risk (urethra, bladder, and rectum) was found while improving dose homogeneity or dose conformity of the target volume. Conclusions: Following the promising results of gEUD-based optimization in intensity modulated radiation therapy treatment optimization, as reported in the literature, the implementation of a similar model in HDR brachytherapy treatment plan optimization is suggested by this study. The potential of improved sparing of organs at risk was shown for various gEUD-based optimization parameter protocols, which indicates the ability of this method to adapt to the user's preferences.
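The gEUD that drives this optimization has a simple closed form, gEUD = (mean of d_i^a)^(1/a), where the parameter a encodes the organ's architecture. A minimal implementation (the example doses and a-values are illustrative):

```python
import numpy as np

def gEUD(dose, a):
    """Generalized equivalent uniform dose of a voxel dose array.
    Large positive a approaches the maximum dose (serial organs),
    a = 1 gives the mean dose (parallel organs), and negative a
    emphasizes cold spots (targets). Doses must be positive when a < 0."""
    dose = np.asarray(dose, dtype=float)
    return np.mean(dose ** a) ** (1.0 / a)

organ = np.array([1.0, 2.0, 4.0, 8.0])
```

For a uniform dose distribution the gEUD equals that dose for every a, which is the defining property of an "equivalent uniform dose".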
Real-time inverse planning for Gamma Knife radiosurgery.
Wu, Q Jackie; Chankong, Vira; Jitprapaikulsarn, Suradet; Wessels, Barry W; Einstein, Douglas B; Mathayomchan, Boonyanit; Kinsella, Timothy J
2003-11-01
The challenges of real-time Gamma Knife inverse planning are the large number of variables involved and the unknown search space a priori. With limited collimator sizes, shots have to be heavily overlapped to form a smooth prescription isodose line that conforms to the irregular target shape. Such overlaps greatly influence the total number of shots per plan, making pre-determination of the total number of shots impractical. However, this total number of shots usually defines the search space, a pre-requisite for most of the optimization methods. Since each shot only covers part of the target, a collection of shots in different locations and various collimator sizes selected makes up the global dose distribution that conforms to the target. Hence, planning or placing these shots is a combinatorial optimization process that is computationally expensive by nature. We have previously developed a theory of shot placement and optimization based on skeletonization. The real-time inverse planning process, reported in this paper, is an expansion and the clinical implementation of this theory. The complete planning process consists of two steps. The first step is to determine an optimal number of shots including locations and sizes and to assign initial collimator size to each of the shots. The second step is to fine-tune the weights using a linear-programming technique. The objective function is to minimize the total dose to the target boundary (i.e., maximize the dose conformity). Results of an ellipsoid test target and ten clinical cases are presented. The clinical cases are also compared with physician's manual plans. The target coverage is more than 99% for manual plans and 97% for all the inverse plans. The RTOG PITV conformity indices for the manual plans are between 1.16 and 3.46, compared to 1.36 to 2.4 for the inverse plans. All the inverse plans are generated in less than 2 min, making real-time inverse planning a reality.
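The second, weight fine-tuning step described above, minimizing total dose on the target boundary subject to coverage of the target points, is a linear program and can be sketched with `scipy.optimize.linprog`. The shot kernels and prescription below are made-up numbers, not the paper's data:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)

# hypothetical kernels: dose per unit shot weight at 4 target points and
# 3 boundary points, for 3 candidate shots
K_target = rng.uniform(0.8, 1.2, size=(4, 3))
K_boundary = rng.uniform(0.1, 0.4, size=(3, 3))
prescription = 10.0

# minimize total boundary dose subject to target coverage, weights >= 0
res = linprog(
    c=K_boundary.sum(axis=0),          # objective: total boundary dose
    A_ub=-K_target,                    # -K w <= -presc  <=>  K w >= presc
    b_ub=-np.full(4, prescription),
    bounds=[(0, None)] * 3,            # nonnegative shot weights
)
weights = res.x
```

Because only the weights are optimized here, the shot positions and collimator sizes (the combinatorial part) must already be fixed, which mirrors the two-step structure the abstract describes.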
NASA Astrophysics Data System (ADS)
Guthier, C.; Aschenbrenner, K. P.; Buergy, D.; Ehmann, M.; Wenz, F.; Hesser, J. W.
2015-03-01
This work discusses a novel strategy for inverse planning in low dose rate brachytherapy. It applies the idea of compressed sensing to the problem of inverse treatment planning and a new solver for this formulation is developed. An inverse planning algorithm was developed incorporating brachytherapy dose calculation methods as recommended by AAPM TG-43. For optimization of the functional a new variant of a matching pursuit type solver is presented. The results are compared with current state-of-the-art inverse treatment planning algorithms by means of real prostate cancer patient data. The novel strategy outperforms the best state-of-the-art methods in speed, while achieving comparable quality. It is able to find solutions with comparable values for the objective function and it achieves these results within a few microseconds, being up to 542 times faster than competing state-of-the-art strategies, allowing real-time treatment planning. The sparse solution of inverse brachytherapy planning achieved with methods from compressed sensing is a new paradigm for optimization in medical physics. Through the sparsity of required needles and seeds identified by this method, the cost of intervention may be reduced.
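The matching-pursuit idea at the heart of this approach, greedily selecting the few seed/dwell "dose templates" that best explain the desired dose, can be sketched as follows. The dictionary and target dose are synthetic, and the paper's actual solver variant differs from this textbook form:

```python
import numpy as np

rng = np.random.default_rng(3)

# hypothetical dictionary: each column is the (normalized) dose distribution
# of one candidate seed position; d is the desired dose
A = rng.uniform(0, 1, size=(20, 15))
A /= np.linalg.norm(A, axis=0)
d = A[:, [2, 7]] @ np.array([3.0, 2.0])  # dose achievable with just 2 seeds

def matching_pursuit(A, d, n_seeds):
    """Greedy matching pursuit: repeatedly pick the candidate whose dose
    template correlates best with the residual, yielding a sparse seed set."""
    x = np.zeros(A.shape[1])
    r = d.copy()
    for _ in range(n_seeds):
        j = np.argmax(np.abs(A.T @ r))   # best-correlated candidate
        coef = A[:, j] @ r
        x[j] += coef
        r -= coef * A[:, j]              # remove its contribution
    return x, r

x, residual = matching_pursuit(A, d, n_seeds=4)
```

The sparsity of `x` is what the abstract's cost argument rests on: fewer nonzero entries mean fewer needles and seeds to place.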
Clinical knowledge-based inverse treatment planning
NASA Astrophysics Data System (ADS)
Yang, Yong; Xing, Lei
2004-11-01
Clinical IMRT treatment plans are currently made using dose-based optimization algorithms, which do not consider the nonlinear dose-volume effects for tumours and normal structures. The choice of structure specific importance factors represents an additional degree of freedom of the system and makes rigorous optimization intractable. The purpose of this work is to circumvent the two problems by developing a biologically more sensible yet clinically practical inverse planning framework. To implement this, the dose-volume status of a structure was characterized by using the effective volume in the voxel domain. A new objective function was constructed with the incorporation of the volumetric information of the system so that the figure of merit of a given IMRT plan depends not only on the dose deviation from the desired distribution but also the dose-volume status of the involved organs. The conventional importance factor of an organ was written into a product of two components: (i) a generic importance that parametrizes the relative importance of the organs in the ideal situation when the goals for all the organs are met; (ii) a dose-dependent factor that quantifies our level of clinical/dosimetric satisfaction for a given plan. The generic importance can be determined a priori, and in most circumstances, does not need adjustment, whereas the second one, which is responsible for the intractable behaviour of the trade-off seen in conventional inverse planning, was determined automatically. An inverse planning module based on the proposed formalism was implemented and applied to a prostate case and a head-neck case. A comparison with the conventional inverse planning technique indicated that, for the same target dose coverage, the critical structure sparing was substantially improved for both cases. 
The incorporation of clinical knowledge allows us to obtain better IMRT plans and makes it possible to auto-select the importance factors, greatly facilitating the inverse planning process. The new formalism proposed also reveals the relationship between different inverse planning schemes and gives important insight into the problem of therapeutic plan optimization. In particular, we show that the EUD-based optimization is a special case of the general inverse planning formalism described in this paper.
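The two-component importance factor described above, a fixed generic weight multiplied by a dose-dependent "satisfaction" factor, might take a form like the following toy function. The functional form and parameters are my illustration; the paper derives its own satisfaction measure from the effective volume:

```python
import numpy as np

def adaptive_importance(dose, limit, generic_weight, sensitivity=2.0):
    """Effective importance of an organ: a fixed generic weight times a
    dose-dependent factor that grows as the organ's dose exceeds its goal,
    expressing dissatisfaction with the current plan (hypothetical form)."""
    excess = max(np.mean(dose) / limit - 1.0, 0.0)
    return generic_weight * (1.0 + excess) ** sensitivity
```

During optimization such a factor is re-evaluated as the plan changes, so organs that currently violate their goals automatically receive more weight without manual trial-and-error tuning.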
Ghobadi, Kimia; Ghaffari, Hamid R; Aleman, Dionne M; Jaffray, David A; Ruschin, Mark
2012-06-01
The purpose of this work is to develop a framework for the inverse problem of radiosurgery treatment planning on the Gamma Knife® Perfexion™ (PFX) for intracranial targets. The approach taken in the present study consists of two parts. First, a hybrid grassfire and sphere-packing algorithm is used to obtain shot positions (isocenters) based on the geometry of the target to be treated. For the selected isocenters, a sector duration optimization (SDO) model is used to optimize the duration of radiation delivery from each collimator size from each individual source bank. The SDO model is solved using a projected gradient algorithm. This approach has been retrospectively tested on seven manually planned clinical cases (comprising 11 lesions) including acoustic neuromas and brain metastases. In terms of conformity and organ-at-risk (OAR) sparing, the quality of plans achieved with the inverse planning approach was, on average, improved compared to the manually generated plans. The mean difference in conformity index between inverse and forward plans was -0.12 (range: -0.27 to +0.03) and +0.08 (range: 0.00 to 0.17) for the classic and Paddick definitions, respectively, favoring the inverse plans. The mean difference in volume receiving the prescribed dose (V100) between forward and inverse plans was 0.2% (range: -2.4% to +2.0%). After plan renormalization for equivalent coverage (i.e., V100), the mean difference in dose to 1 mm³ of brainstem between forward and inverse plans was -0.24 Gy (range: -2.40 to +2.02 Gy), favoring the inverse plans. Beam-on time varied with the number of isocenters, but for the most optimal plans it was on average 33 min longer than for manual plans (range: -17 to +91 min) when normalized to a calibration dose rate of 3.5 Gy/min. In terms of algorithm performance, the isocenter selection for all the presented plans was performed in less than 3 s, while the SDO took an average of 215 min.
PFX inverse planning can be performed using geometric isocenter selection and mathematical modeling and optimization techniques. The obtained treatment plans all meet or exceed clinical guidelines while displaying high conformity. © 2012 American Association of Physicists in Medicine.
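The SDO step solves for nonnegative sector beam-on times; a projected-gradient sketch on a least-squares surrogate shows the essential mechanics (the matrix sizes, dose targets, and quadratic objective are invented for the example; the paper's model and penalties are richer):

```python
import numpy as np

rng = np.random.default_rng(4)

# hypothetical dose-rate matrix: dose at 10 points from 6 sector/collimator
# combinations per unit beam-on time; d is the desired dose at those points
A = rng.uniform(0, 1, size=(10, 6))
d = rng.uniform(5, 10, size=10)

def projected_gradient(A, d, iters=500):
    """Minimize ||A t - d||^2 over nonnegative sector durations t by
    gradient steps followed by projection onto the constraint t >= 0."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # safe step from the spectral norm
    t = np.zeros(A.shape[1])
    for _ in range(iters):
        t = np.clip(t - step * A.T @ (A @ t - d), 0, None)
    return t

t = projected_gradient(A, d)
```

The projection (here a simple clip to zero) is what keeps every iterate physically deliverable, which is why projected gradient methods suit duration-type variables.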
Li, Yongbao; Tian, Zhen; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun
2017-01-07
Monte Carlo (MC)-based spot dose calculation is highly desired for inverse treatment planning in proton therapy because of its accuracy. Recent studies on biological optimization have also indicated the use of MC methods to compute relevant quantities of interest, e.g. linear energy transfer. Although GPU-based MC engines have been developed to address inverse optimization problems, their efficiency still needs to be improved. Also, the use of a large number of GPUs in MC calculation is not favorable for clinical applications. The previously proposed adaptive particle sampling (APS) method can improve the efficiency of MC-based inverse optimization by using the computationally expensive MC simulation more effectively. This method is more efficient than the conventional approach that performs spot dose calculation and optimization in two sequential steps. In this paper, we propose a computational library to perform MC-based spot dose calculation on GPU with the APS scheme. The implemented APS method performs a non-uniform sampling of the particles from pencil beam spots during the optimization process, favoring those from the high intensity spots. The library also conducts two computationally intensive matrix-vector operations frequently used when solving an optimization problem. This library design allows a streamlined integration of the MC-based spot dose calculation into an existing proton therapy inverse planning process. We tested the developed library in a typical inverse optimization system with four patient cases. The library achieved the targeted functions by supporting inverse planning in various proton therapy schemes, e.g. single field uniform dose, 3D intensity modulated proton therapy, and distal edge tracking. The efficiency was 41.6 ± 15.3% higher than the use of a GPU-based MC package in a conventional calculation scheme. The total computation time ranged between 2 and 50 min on a single GPU card depending on the problem size.
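The non-uniform sampling at the core of the APS scheme, allotting the MC particle budget across pencil-beam spots in proportion to their current intensities, is simple to sketch (the intensities and budget below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)

def sample_particles(spot_intensities, n_particles):
    """Distribute an MC particle budget across pencil-beam spots in
    proportion to current spot intensity, so high-weight spots are
    simulated with more particles (the non-uniform sampling idea in APS)."""
    w = np.asarray(spot_intensities, dtype=float)
    p = w / w.sum()
    return rng.multinomial(n_particles, p)

counts = sample_particles([10.0, 1.0, 0.1, 0.0], 100000)
```

Spots whose intensities the optimizer drives toward zero receive essentially no particles, which is how the expensive MC effort gets concentrated where it affects the plan.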
Role of step size and max dwell time in anatomy based inverse optimization for prostate implants
Manikandan, Arjunan; Sarkar, Biplab; Rajendran, Vivek Thirupathur; King, Paul R.; Sresty, N.V. Madhusudhana; Holla, Ragavendra; Kotur, Sachin; Nadendla, Sujatha
2013-01-01
In high dose rate (HDR) brachytherapy, the source dwell times and dwell positions are vital parameters in achieving a desirable implant dose distribution. Inverse treatment planning requires an optimal choice of these parameters to achieve the desired target coverage with the lowest achievable dose to the organs at risk (OAR). This study was designed to evaluate the optimum source step size and maximum source dwell time for prostate brachytherapy implants using an Ir-192 source. In total, one hundred inverse treatment plans were generated for the four patients included in this study. Twenty-five treatment plans were created for each patient by varying the step size and maximum source dwell time during anatomy-based, inverse-planned optimization. Other relevant treatment planning parameters were kept constant, including the dose constraints and source dwell positions. Each plan was evaluated for target coverage, urethral and rectal dose sparing, treatment time, relative target dose homogeneity, and nonuniformity ratio. The plans with 0.5 cm step size were seen to have clinically acceptable tumor coverage, minimal normal structure doses, and minimum treatment time as compared with the other step sizes. The target coverage for this step size is 87% of the prescription dose, while the urethral and maximum rectal doses were 107.3 and 68.7%, respectively. No appreciable difference in plan quality was observed with variation in maximum source dwell time. The step size plays a significant role in plan optimization for prostate implants. Our study supports use of a 0.5 cm step size for prostate implants. PMID:24049323
Pardo-Montero, Juan; Fenwick, John D
2010-06-01
The purpose of this work is twofold: to further develop an approach to multiobjective optimization of rotational therapy treatments recently introduced by the authors [J. Pardo-Montero and J. D. Fenwick, "An approach to multiobjective optimization of rotational therapy," Med. Phys. 36, 3292-3303 (2009)], especially regarding its application to realistic geometries, and to study the quality (Pareto optimality) of plans obtained using such an approach by comparing them with Pareto-optimal plans obtained through inverse planning. In that previous work, a methodology was proposed for constructing a large number of plans, with different compromises between the objectives involved, from a small number of geometrically based arcs, each arc prioritizing different objectives. Here, this method has been further developed and studied. Two different techniques for constructing these arcs are investigated, one based on image-reconstruction algorithms and the other on more common gradient-descent algorithms. The difficulty of dealing with organs abutting the target, briefly reported in that work, has been investigated using partial OAR unblocking. Optimality of the solutions has been investigated by comparison with a Pareto front obtained from inverse planning. A relative Euclidean distance has been used to measure the distance of these plans to the Pareto front, and dose-volume histogram comparisons have been used to gauge the clinical impact of these distances. A prostate geometry has been used for the study. For geometries where a blocked OAR abuts the target, moderate OAR unblocking can substantially improve target dose distribution and minimize hot spots while not overly compromising dose sparing of the organ. Image-reconstruction type and gradient-descent blocked-arc computations generate similar results.
The Pareto front for the prostate geometry, reconstructed using a large number of inverse plans, presents a hockey-stick shape comprising two regions: One where the dose to the target is close to prescription and trade-offs can be made between doses to the organs at risk and (small) changes in target dose, and one where very substantial rectal sparing is achieved at the cost of large target underdosage. Plans computed following the approach using a conformal arc and four blocked arcs generally lie close to the Pareto front, although distances of some plans from high gradient regions of the Pareto front can be greater. Only around 12% of plans lie a relative Euclidean distance of 0.15 or greater from the Pareto front. Using the alternative distance measure of Craft ["Calculating and controlling the error of discrete representations of Pareto surfaces in convex multi-criteria optimization," Phys. Medica (to be published)], around 2/5 of plans lie more than 0.05 from the front. Computation of blocked arcs is quite fast, the algorithms requiring 35%-80% of the running time per iteration needed for conventional inverse plan computation. The geometry-based arc approach to multicriteria optimization of rotational therapy allows solutions to be obtained that lie close to the Pareto front. Both the image-reconstruction type and gradient-descent algorithms produce similar modulated arcs, the latter one perhaps being preferred because it is more easily implementable in standard treatment planning systems. Moderate unblocking provides a good way of dealing with OARs which abut the PTV. Optimization of geometry-based arcs is faster than usual inverse optimization of treatment plans, making this approach more rapid than an inverse-based Pareto front reconstruction.
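The relative Euclidean distance used above to gauge closeness to the Pareto front can be sketched like this; the normalization by the norm of the nearest front point is an assumed convention, which may differ in detail from the paper's definition:

```python
import numpy as np

def relative_distance_to_front(plan, front):
    """Relative Euclidean distance from a plan (vector of objective
    values) to a discretely sampled Pareto front: the distance to the
    nearest front point, normalized by that point's norm."""
    plan = np.asarray(plan, dtype=float)
    front = np.asarray(front, dtype=float)
    d = np.linalg.norm(front - plan, axis=1)   # distance to each front plan
    i = np.argmin(d)
    return d[i] / np.linalg.norm(front[i])

front = np.array([[1.0, 1.0], [2.0, 0.5]])     # two sampled Pareto plans
d = relative_distance_to_front([1.1, 1.0], front)
```

A plan lying exactly on a sampled front point has distance 0; the 0.15 threshold quoted in the abstract would flag plans noticeably off the front.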
Optimization of equivalent uniform dose using the L-curve criterion.
Chvetsov, Alexei V; Dempsey, James F; Palta, Jatinder R
2007-10-07
Optimization of equivalent uniform dose (EUD) in inverse planning for intensity-modulated radiation therapy (IMRT) prevents variation in radiobiological effect between different radiotherapy treatment plans, which is due to variation in the pattern of dose nonuniformity. For instance, the survival fraction of clonogens would be consistent with the prescription when the optimized EUD is equal to the prescribed EUD. One of the problems in the practical implementation of this approach is that the spatial dose distribution in EUD-based inverse planning would be underdetermined because an unlimited number of nonuniform dose distributions can be computed for a prescribed value of EUD. Together with ill-posedness of the underlying integral equation, this may significantly increase the dose nonuniformity. To optimize EUD and keep dose nonuniformity within reasonable limits, we implemented into an EUD-based objective function an additional criterion which ensures the smoothness of beam intensity functions. This approach is similar to the variational regularization technique which was previously studied for the dose-based least-squares optimization. We show that the variational regularization together with the L-curve criterion for the regularization parameter can significantly reduce dose nonuniformity in EUD-based inverse planning.
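The generalized EUD at the core of such an objective is the power mean of the voxel doses; a minimal sketch (the volume parameter `a` is tissue specific, and the underdetermination discussed above arises because many nonuniform dose patterns share one EUD value):

```python
import numpy as np

def eud(dose, a):
    """Generalized equivalent uniform dose: EUD = (mean(d_i^a))^(1/a).
    a = 1 gives the mean dose; large positive a approaches the maximum
    dose, as appropriate for serial structures."""
    d = np.asarray(dose, dtype=float)
    return np.mean(d ** a) ** (1.0 / a)
```

A uniform distribution returns its own dose, while for a > 1 hot voxels pull the EUD upward, which is exactly why a prescribed EUD alone does not pin down the spatial dose distribution.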
Guthier, Christian V; Damato, Antonio L; Hesser, Juergen W; Viswanathan, Akila N; Cormack, Robert A
2017-12-01
Interstitial high-dose rate (HDR) brachytherapy is an important therapeutic strategy for the treatment of locally advanced gynecologic (GYN) cancers. The outcome of this therapy is determined by the quality of the dose distribution achieved. This paper focuses on a novel yet simple heuristic for catheter selection for GYN HDR brachytherapy and its comparison against state-of-the-art optimization strategies. The proposed technique is intended to act as a decision-supporting tool to select a favorable needle configuration. The presented heuristic for catheter optimization is based on a shrinkage-type algorithm (SACO). It is compared against state-of-the-art planning in a retrospective study of 20 patients who previously received image-guided interstitial HDR brachytherapy using a Syed Neblett template. From those plans, template orientation and position are estimated via a rigid registration of the template with the actual catheter trajectories. All potential straight trajectories intersecting the contoured clinical target volume (CTV) are considered for catheter optimization. Retrospectively generated plans and clinical plans are compared with respect to dosimetric performance and optimization time. All plans were generated with a single run of the optimizer lasting 0.6-97.4 s. Compared to manual optimization, SACO yields a statistically significant (P ≤ 0.05) improvement in target coverage while at the same time fulfilling all dosimetric constraints for organs at risk (OARs). Comparing inverse planning strategies, dosimetric evaluation for SACO and "hybrid inverse planning and optimization" (HIPO), as the gold standard, shows no statistically significant difference (P > 0.05). However, SACO provides the potential to reduce the number of catheters used without compromising plan quality. The proposed heuristic for needle selection provides fast catheter selection with optimization times suited for intraoperative treatment planning.
Compared to manual optimization, the proposed methodology results in fewer catheters without a clinically significant loss in plan quality. The proposed approach can be used as a decision support tool that guides the user to find the ideal number and configuration of catheters. © 2017 American Association of Physicists in Medicine.
Na, Y; Suh, T; Xing, L
2012-06-01
Multi-objective (MO) plan optimization entails generation of an enormous number of IMRT or VMAT plans constituting the Pareto surface, which presents a computationally challenging task. The purpose of this work is to overcome this hurdle by developing an efficient MO method using the emerging cloud computing platform. As the backbone of cloud computing for optimizing inverse treatment planning, Amazon Elastic Compute Cloud with a master node (17.1 GB memory, 2 virtual cores, 420 GB instance storage, 64-bit platform) is used. The master node is able to scale seamlessly a number of working group instances, called workers, based on user-defined settings to account for MO functions in the clinical setting. Each worker solves the objective function with an efficient sparse decomposition method. The workers are automatically terminated once their tasks are finished. The optimized plans are archived to the master node to generate the Pareto solution set. Three clinical cases have been planned using the developed MO IMRT and VMAT planning tools to demonstrate the advantages of the proposed method. The target dose coverage and critical structure sparing of plans obtained using the cloud computing platform are identical to those obtained using a desktop PC (Intel Xeon® CPU 2.33 GHz, 8 GB memory). It is found that the MO planning substantially speeds up generation of the Pareto set for both types of plans. The speedup scales approximately linearly with the number of nodes used for computing. With the use of N nodes, the computational time is reduced according to the fitted model 0.2 + 2.3/N, with r² > 0.99, on average across the cases, making real-time MO planning possible. A cloud computing infrastructure is developed for MO optimization. The algorithm substantially improves the speed of inverse plan optimization. The platform is valuable for both MO planning and future off- or on-line adaptive re-planning. © 2012 American Association of Physicists in Medicine.
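The reported scaling model T(N) = 0.2 + 2.3/N is linear in its parameters, so it can be recovered with ordinary least squares; a sketch with synthetic timings generated from that model (the numbers are illustrative, not the study's measurements):

```python
import numpy as np

def fit_time_model(nodes, times):
    """Least-squares fit of the node-scaling model T(N) = a + b/N.
    Linear in (a, b), so a single lstsq solve suffices."""
    N = np.asarray(nodes, dtype=float)
    A = np.column_stack([np.ones_like(N), 1.0 / N])  # design matrix [1, 1/N]
    (a, b), *_ = np.linalg.lstsq(A, np.asarray(times, dtype=float), rcond=None)
    return a, b

# Synthetic timings following T(N) = 0.2 + 2.3/N for N = 1, 2, 4, 8.
a, b = fit_time_model([1, 2, 4, 8], [2.5, 1.35, 0.775, 0.4875])
```

The constant term `a` is the non-parallelizable overhead that caps the speedup no matter how many workers are added.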
An efficient inverse radiotherapy planning method for VMAT using quadratic programming optimization.
Hoegele, W; Loeschel, R; Merkle, N; Zygmanski, P
2012-01-01
The purpose of this study is to investigate the feasibility of an inverse planning optimization approach for Volumetric Modulated Arc Therapy (VMAT) based on quadratic programming and the projection method. The performance of this method is evaluated against a reference commercial planning system (Eclipse™ for RapidArc™) for clinically relevant cases. The inverse problem is posed in terms of a linear combination of basis functions representing arclet dose contributions and their respective linear coefficients as degrees of freedom. MLC motion is decomposed into basic motion patterns in an intuitive manner, leading to a system of equations with a relatively small number of equations and unknowns. These equations are solved using quadratic programming under certain limiting physical conditions for the solution, such as the avoidance of negative dose during optimization and Monitor Unit reduction. The modeling by the projection method assures a unique treatment plan with beneficial properties, such as the explicit relation between organ weightings and the final dose distribution. Clinical cases studied include prostate and spine treatments. The optimized plans are evaluated by comparing isodose lines, DVH profiles for target and normal organs, and Monitor Units to those obtained with the clinical treatment planning system Eclipse™. The resulting dose distributions for a prostate case (with rectum and bladder as organs at risk) and a spine case (with kidneys, liver, lung and heart as organs at risk) are presented. Overall, the results indicate that plan quality similar to RapidArc™ could be achieved with quadratic programming (QP) at significantly less computational and planning effort. Additionally, results for the QUASIMODO phantom [Bohsung et al., "IMRT treatment planning: A comparative inter-system and inter-centre planning exercise of the ESTRO QUASIMODO group," Radiother. Oncol. 76(3), 354-361 (2005)] are presented as an example of an extreme concave case. Quadratic programming is an alternative approach to inverse planning which generates clinically satisfactory plans in comparison to the clinical system and constitutes an efficient optimization process characterized by uniqueness and reproducibility of the solution.
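A minimal sketch of the kind of quadratic program described above (least-squares matching of prescribed voxel doses with the non-negativity condition on the arclet coefficients), solved here by projected gradient descent; the toy influence matrix is illustrative, not the paper's solver or data:

```python
import numpy as np

def solve_qp_nonneg(A, d, iters=2000):
    """Projected gradient descent for min ||A x - d||^2 s.t. x >= 0:
    the simplest QP with the no-negative-dose condition imposed during
    optimization."""
    lr = 1.0 / np.linalg.norm(A, 2) ** 2   # step size from the Lipschitz bound
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = np.maximum(0.0, x - lr * (A.T @ (A @ x - d)))  # project onto x >= 0
    return x

# Toy influence matrix: dose to 3 voxels from 2 arclet basis functions.
A = np.array([[1.0, 0.2], [0.5, 0.8], [0.1, 1.0]])
x = solve_qp_nonneg(A, np.array([1.2, 1.3, 1.1]))
```

Because the objective is strictly convex here, the solution is unique, mirroring the uniqueness and reproducibility claims of the abstract.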
Carrara, Mauro; Cusumano, Davide; Giandini, Tommaso; Tenconi, Chiara; Mazzarella, Ester; Grisotto, Simone; Massari, Eleonora; Mazzeo, Davide; Cerrotta, Annamaria; Pappalardi, Brigida; Fallai, Carlo; Pignoli, Emanuele
2017-12-01
A direct planning approach with multi-channel vaginal cylinders (MVCs) used for HDR brachytherapy of vaginal cancers is particularly challenging. The purpose of this study was to compare the dosimetric performance of different forward and inverse methods used for the optimization of MVC-based vaginal treatments for endometrial cancer, with particular attention to strategies useful for limiting high doses to the vaginal mucosa. Twelve postoperative vaginal HDR brachytherapy treatments performed with MVCs were considered. Plans were retrospectively optimized with three different methods: Dose Point Optimization followed by Graphical Optimization (DPO + GrO), Inverse Planning Simulated Annealing with two different class solutions as starting conditions (surfIPSA and homogIPSA), and Hybrid Inverse Planning Optimization (HIPO). Several dosimetric parameters related to target coverage, hot spot extension and sparing of organs at risk were analyzed to evaluate the quality of the achieved treatment plans. The dose homogeneity index (DHI), conformal index (COIN) and a further parameter quantifying the proportion of the central catheter loading with respect to the overall loading (i.e., the central catheter loading index, CCLI) were also quantified. The achieved PTV coverage parameters were highly correlated with each other but uncorrelated with the hot spot quantifiers. HomogIPSA and HIPO achieved higher DHIs and CCLIs and lower volumes of high doses than DPO + GrO and surfIPSA. Among the investigated optimization methods, HIPO and homogIPSA showed the highest dose homogeneity to the target. In particular, homogIPSA was also the most effective at reducing hot spots to the vaginal mucosa. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Inverse planning in the age of digital LINACs: station parameter optimized radiation therapy (SPORT)
NASA Astrophysics Data System (ADS)
Xing, Lei; Li, Ruijiang
2014-03-01
The last few years have seen a number of technical and clinical advances that give rise to a need for innovations in dose optimization and delivery strategies. Technically, a new generation of digital linac has become available which offers features such as programmable motion between station parameters and high-dose-rate flattening-filter-free (FFF) beams. Current inverse planning methods are designed for traditional machines and cannot accommodate these features of new-generation linacs without compromising dose conformality and/or delivery efficiency. Furthermore, SBRT is becoming increasingly important, which elevates the need for more efficient delivery and improved dose distributions. Here we give an overview of our recent work in SPORT designed to harness the digital linacs and highlight the essential components of SPORT. We summarize the pros and cons of traditional beamlet-based optimization (BBO) and direct aperture optimization (DAO) and introduce a new type of algorithm, compressed sensing (CS)-based inverse planning, that is capable of automatically removing redundant segments during optimization and providing a plan with high deliverability in the presence of a large number of station control points (potentially non-coplanar, non-isocentric, and even multi-isocenter). We show that the CS approach takes the interplay between planning and delivery into account, allows us to balance dose optimality and delivery efficiency in a controlled way, and provides a viable framework to address various unmet demands of the new generation of linacs. A few specific implementation strategies of SPORT in the forms of fixed-gantry and rotational arc delivery are also presented.
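The segment-removal behavior of CS-based inverse planning can be illustrated with an L1-penalized least-squares toy problem solved by iterative soft thresholding; the influence matrix, penalty weight, and solver are assumptions for illustration, not the SPORT formulation:

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_station_weights(A, d, lam=0.05, iters=3000):
    """ISTA for min 0.5*||A x - d||^2 + lam*||x||_1 with x >= 0: the L1
    penalty drives redundant station weights exactly to zero, the
    automatic segment-removal behavior described above."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x - (A.T @ (A @ x - d)) / L, lam / L)
        x = np.maximum(0.0, x)
    return x

# Station 2 covers both voxels, making stations 0 and 1 redundant here.
A = np.array([[1.0, 0.0, 0.7], [0.0, 1.0, 0.7]])
x = sparse_station_weights(A, np.array([1.0, 1.0]))
```

The optimizer keeps only the one station that suffices, rather than spreading weight over all control points, which is the deliverability argument made in the abstract.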
NASA Astrophysics Data System (ADS)
Holmes, Timothy W.
2001-01-01
A detailed tomotherapy inverse treatment planning method is described which incorporates leakage and head scatter corrections during each iteration of the optimization process, allowing these effects to be directly accounted for in the optimized dose distribution. It is shown that the conventional inverse planning method for optimizing incident intensity can be extended to include a `concurrent' leaf sequencing operation from which the leakage and head scatter corrections are determined. The method is demonstrated using the steepest-descent optimization technique with constant step size and a least-squared error objective. The method was implemented using the MATLAB scientific programming environment and its feasibility demonstrated for 2D test cases simulating treatment delivery using a single coplanar rotation. The results indicate that this modification does not significantly affect convergence of the intensity optimization method when exposure times of individual leaves are stratified to a large number of levels (>100) during leaf sequencing. In general, the addition of aperture dependent corrections, especially `head scatter', reduces incident fluence in local regions of the modulated fan beam, resulting in increased exposure times for individual collimator leaves. These local variations can result in 5% or greater local variation in the optimized dose distribution compared to the uncorrected case. The overall efficiency of the modified intensity optimization algorithm is comparable to that of the original unmodified case.
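The idea of applying aperture-dependent corrections inside each optimization iteration can be sketched with a steepest-descent least-squares loop; the fixed per-beamlet transmission factors below stand in for the leakage and head-scatter corrections and are purely illustrative, not the paper's model:

```python
import numpy as np

def optimize_with_corrections(A, d_presc, correction, iters=500, lr=0.1):
    """Steepest-descent least-squares intensity optimization in which a
    per-beamlet correction factor (a stand-in for leakage/head-scatter
    effects) is applied inside every iteration, so its effect is already
    accounted for in the optimized dose."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        d = A @ (correction * x)                # corrected fluence -> dose
        g = correction * (A.T @ (d - d_presc))  # chain rule through the correction
        x = np.maximum(0.0, x - lr * g)
    return x

# Two-beamlet toy: beamlet 0 loses 5% of its fluence to aperture effects.
x = optimize_with_corrections(np.eye(2), np.array([1.0, 1.0]),
                              correction=np.array([0.95, 1.0]))
```

As in the abstract, the corrected beamlet ends up with a higher optimized intensity (longer leaf exposure) to compensate for the local fluence reduction.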
Intensity modulated neutron radiotherapy optimization by photon proxy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snyder, Michael; Hammoud, Ahmad; Bossenberger, Todd
2012-08-15
Purpose: Introducing intensity modulation into neutron radiotherapy (IMNRT) planning has the potential to mitigate some normal tissue complications seen in past neutron trials. While the hardware to deliver IMNRT plans has been in use for several years, until recently the IMNRT planning process has been cumbersome and of lower fidelity than conventional photon plans. Our in-house planning system used to calculate neutron therapy plans allows beam weight optimization of forward planned segments, but does not provide inverse optimization capabilities. Commercial treatment planning systems provide inverse optimization capabilities, but currently cannot model our neutron beam. Methods: We have developed a methodology and software suite to make use of the robust optimization in our commercial planning system while still using our in-house planning system to calculate final neutron dose distributions. Optimized multileaf collimator (MLC) leaf positions for segments designed in the commercial system using a 4 MV photon proxy beam are translated into static neutron ports that can be represented within our in-house treatment planning system. The true neutron dose distribution is calculated in the in-house system and then exported back through the MATLAB software into the commercial treatment planning system for evaluation. Results: The planning process produces optimized IMNRT plans that reduce dose to normal tissue structures as compared to 3D conformal plans using static MLC apertures. The process involves standard planning techniques using a commercially available treatment planning system, and is not significantly more complex than conventional IMRT planning. Using a photon proxy in a commercial optimization algorithm produces IMNRT plans that are more conformal than those previously designed at our center and take much less time to create.
Conclusions: The planning process presented here allows for the optimization of IMNRT plans by a commercial treatment planning optimization algorithm, potentially allowing IMNRT to achieve similar conformality in treatment as photon IMRT. The only remaining requirements for the delivery of very highly modulated neutron treatments are incremental improvements upon already implemented hardware systems that should be readily achievable.
Automated IMRT planning with regional optimization using planning scripts
Wong, Eugene; Bzdusek, Karl; Lock, Michael; Chen, Jeff Z.
2013-01-01
Intensity-modulated radiation therapy (IMRT) has become a standard technique in radiation therapy for treating different types of cancers. Various class solutions have been developed for simple cases (e.g., localized prostate, whole breast) to generate IMRT plans efficiently. However, for more complex cases (e.g., head and neck, pelvic nodes), it can be time-consuming for a planner to generate optimized IMRT plans. To generate optimal plans in these more complex cases, which generally have multiple target volumes and organs at risk, it is often necessary to add IMRT optimization structures such as dose-limiting ring structures, adjust beam geometry, select inverse planning objectives and their associated weights, and introduce additional IMRT objectives to reduce cold and hot spots in the dose distribution. These parameters are generally adjusted manually, with a repeated trial-and-error approach, during the optimization process. To improve IMRT planning efficiency in these more complex cases, an iterative method that incorporates some of these adjustments automatically in a planning script was designed, implemented, and validated. In particular, regional optimization was implemented iteratively to reduce hot and cold spots during the optimization process: hot and cold spots are defined and automatically segmented, new objectives and their relative weights are introduced into the inverse planning, and the process repeats until termination criteria are met. The method has been applied to three clinical sites: prostate with pelvic nodes, head and neck, and anal canal cancers, and has been shown to reduce IMRT planning time significantly for clinical applications while improving plan quality. The IMRT planning scripts have been used for more than 500 clinical cases. PACS numbers: 87.55.D, 87.55.de PMID:23318393
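The segmentation step that opens each pass of the regional-optimization loop can be sketched as follows; the 107%/93% tolerances are assumed for illustration, not values taken from the script:

```python
import numpy as np

def hot_cold_regions(dose, presc, hot_tol=1.07, cold_tol=0.93):
    """Automatic segmentation of hot and cold spots relative to the
    prescription dose, the first step of each regional-optimization
    iteration. Returns boolean masks over the voxel dose array."""
    d = np.asarray(dose, dtype=float)
    return d > hot_tol * presc, d < cold_tol * presc

# One pass of the loop: segment, then (in a real script) add a regional
# objective for each flagged region, re-optimize, and repeat until no
# spots remain or an iteration cap is hit.
dose = np.array([64.0, 74.2, 77.5, 66.0, 71.0])
hot, cold = hot_cold_regions(dose, presc=70.0)
```

Each flagged region would be converted into a new dose objective with its own weight before the next inverse-planning pass.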
Shi, Chengyu; Guo, Bingqi; Cheng, Chih-Yao; Esquivel, Carlos; Eng, Tony; Papanikolaou, Niko
2010-07-01
The feasibility of intensity modulated brachytherapy (IMBT) to improve dose conformity for irregularly shaped targets has been previously investigated by researchers by means of partially shielded sources. However, partial shielding does not fully explore the potential of IMBT. The goal of this study is to introduce the concept of three-dimensional (3D) intensity modulated brachytherapy and solve two fundamental issues regarding the application of 3D IMBT treatment planning: the dose calculation algorithm and the inverse treatment planning method. A 3D IMBT treatment planning system prototype was developed using the MATLAB platform. This system consists of three major components: (1) a comprehensive IMBT source calibration method with dosimetric inputs from Monte Carlo (EGSnrc) simulations; (2) a "modified TG-43" (mTG-43) dose calculation formalism for IMBT dosimetry; and (3) a physical-constraint-based inverse IMBT treatment planning platform utilizing a simulated annealing optimization algorithm. The model S700 Axxent electronic brachytherapy source developed by Xoft, Inc. (Fremont, CA), was simulated in this application. Ten intracavitary accelerated partial breast irradiation (APBI) cases were studied. For each case, an "isotropic plan" with only optimized source dwell times and a fully optimized IMBT plan were generated and compared to the original plan in various dosimetric aspects, such as plan quality and planning and delivery time. The issue of the mechanical complexity of the IMBT applicator is not addressed in this study. IMBT approaches showed superior plan quality compared to the original plans and the isotropic plans, to different extents, in all studied cases. In an extremely difficult case with a small breast and small distances to the ribs and skin, the IMBT plan reduced the high-dose volume V200 by 16.1% and 4.8%, respectively, compared to the original and the isotropic plans.
The conformity index for the target was increased by 0.13 and 0.04, respectively. The maximum dose to the skin was reduced by 56 and 28 cGy, respectively, per fraction. Also, the maximum dose to the ribs was reduced by 104 and 96 cGy, respectively, per fraction. The mean dose to the ipsilateral and contralateral breasts and lungs were also slightly reduced by the IMBT plan. The limitations of IMBT are the longer planning and delivery time. The IMBT plan took around 2 h to optimize, while the isotropic plan optimization could reach the global minimum within 5 min. The delivery time for the IMBT plan is typically four to six times longer than the corresponding isotropic plan. In this study, a dosimetry method for IMBT sources was proposed and an inverse treatment planning system prototype for IMBT was developed. The improvement of plan quality by 3D IMBT was demonstrated using ten APBI case studies. Faster computers and higher output of the source can further reduce plan optimization and delivery time, respectively.
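The simulated-annealing style of dwell-time optimization used in the prototype can be illustrated with a toy version; the cooling schedule, step size, and dwell-to-voxel dose matrix below are assumptions for this sketch, not the system's actual parameters:

```python
import numpy as np

def anneal_dwell_times(A, d_presc, iters=4000, step=0.05, t0=1.0, seed=0):
    """Toy simulated annealing over non-negative dwell times, minimizing
    squared deviation from prescribed voxel doses. Worse candidates are
    accepted with a temperature-dependent probability; the best state
    seen is returned."""
    rng = np.random.default_rng(seed)
    x = np.ones(A.shape[1])
    cost = np.sum((A @ x - d_presc) ** 2)
    best_x, best_cost = x.copy(), cost
    for k in range(iters):
        temp = t0 * (1.0 - k / iters) + 1e-9     # linear cooling schedule
        cand = np.maximum(0.0, x + rng.normal(0.0, step, x.size))
        c = np.sum((A @ cand - d_presc) ** 2)
        if c < cost or rng.random() < np.exp((cost - c) / temp):
            x, cost = cand, c
            if cost < best_cost:
                best_x, best_cost = x.copy(), cost
    return best_x, best_cost

A = np.array([[1.0, 0.3], [0.3, 1.0]])           # toy dwell-to-voxel matrix
x, cost = anneal_dwell_times(A, np.array([1.0, 1.0]))
```

The stochastic search explains the long optimization times reported above relative to the 5 min isotropic optimization: annealing trades speed for the ability to escape local minima.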
Censor, Yair; Unkelbach, Jan
2011-01-01
In this paper we look at the development of radiation therapy treatment planning from a mathematical point of view. Historically, planning for Intensity-Modulated Radiation Therapy (IMRT) has been considered as an inverse problem. We discuss first the two fundamental approaches that have been investigated to solve this inverse problem: Continuous analytic inversion techniques on one hand, and fully-discretized algebraic methods on the other hand. In the second part of the paper, we review another fundamental question which has been subject to debate from the beginning of IMRT until the present day: The rotation therapy approach versus fixed angle IMRT. This builds a bridge from historic work on IMRT planning to contemporary research in the context of Intensity-Modulated Arc Therapy (IMAT). PMID:21616694
Pokharel, Shyam; Rana, Suresh; Blikenstaff, Joseph; Sadeghi, Amir; Prestidge, Bradley
2013-07-08
The purpose of this study is to investigate the effectiveness of the HIPO planning and optimization algorithm for real-time prostate HDR brachytherapy. This study consists of 20 patients who underwent ultrasound-based real-time HDR brachytherapy of the prostate using the treatment planning system Oncentra Prostate (SWIFT version 3.0). The treatment plans for all patients were optimized using inverse dose-volume histogram-based optimization followed by graphical optimization (GRO) in real time. GRO is manual manipulation of isodose lines slice by slice, so the quality of the plan depends heavily on planner expertise and experience. The data for all patients were retrieved later, and treatment plans were created and optimized using the HIPO algorithm with the same set of dose constraints, number of catheters, and set of contours as in the real-time optimization. The HIPO algorithm is a hybrid because it combines both stochastic and deterministic algorithms. The stochastic algorithm, simulated annealing, searches the optimal catheter distributions for a given set of dose objectives. The deterministic algorithm, dose-volume histogram-based optimization (DVHO), quickly optimizes the three-dimensional dose distribution by moving straight downhill once it is in the advantageous region of the search space found by the stochastic algorithm. The PTV receiving 100% of the prescription dose (V100) was 97.56% and 95.38% with GRO and HIPO, respectively. The mean dose (Dmean) and the minimum dose to 10% of the volume (D10) for the urethra, rectum, and bladder were all statistically lower with HIPO compared to GRO using Student's paired t-test at the 5% significance level. HIPO can provide treatment plans with target coverage comparable to that of GRO with a reduction in dose to the critical structures.
Direct aperture optimization using an inverse form of back-projection.
Zhu, Xiaofeng; Cullip, Timothy; Tracton, Gregg; Tang, Xiaoli; Lian, Jun; Dooley, John; Chang, Sha X
2014-03-06
Direct aperture optimization (DAO) has been used to produce high-dosimetric-quality intensity-modulated radiotherapy (IMRT) treatment plans with fast treatment delivery by directly modeling the multileaf collimator (MLC) segment shapes and weights. To improve plan quality and reduce treatment time for our in-house treatment planning system, we implemented a new DAO approach without using a global objective function (GFO). An index concept is introduced as an inverse form of the back-projection used in the CT multiplicative algebraic reconstruction technique (MART). The index, introduced for IMRT optimization in this work, is analogous to the multiplicand in MART. The index is defined as the ratio of the optimum to the current value. It is assigned to each voxel and beamlet to optimize the fluence map. The indices for beamlets and segments are used to optimize MLC segment shapes and segment weights, respectively. Preliminary data show that, without sacrificing dosimetric quality, the DAO implementation reduced average IMRT treatment time from 13 min to 8 min for the prostate, and from 15 min to 9 min for the head and neck, using our in-house treatment planning system PlanUNC. The DAO approach has also shown promise in optimizing rotational IMRT with burst mode in a head and neck test case.
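The ratio-of-optimum-to-current update can be illustrated with an ISRA-style multiplicative scheme, a standard image-reconstruction update used here as a stand-in for the paper's index operator; the toy problem and the choice of operator are assumptions, not the PlanUNC implementation:

```python
import numpy as np

def index_optimize(A, d_presc, iters=2000):
    """Multiplicative fluence update built from an 'index': each beamlet
    weight is scaled by the ratio of back-projected prescribed dose to
    back-projected current dose, analogous to the multiplicand in MART.
    Weights stay non-negative by construction."""
    w = np.ones(A.shape[1])
    for _ in range(iters):
        index = (A.T @ d_presc) / np.maximum(A.T @ (A @ w), 1e-12)
        w = w * index                      # index -> 1 as the dose converges
    return w

# Consistent toy problem with known non-negative solution [2, 1].
A = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
w = index_optimize(A, A @ np.array([2.0, 1.0]))
```

At convergence every index equals one, i.e. the current quantity matches the optimum, which is the fixed-point intuition behind the index concept.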
SU-F-T-256: 4D IMRT Planning Using An Early Prototype GPU-Enabled Eclipse Workstation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hagan, A; Modiri, A; Sawant, A
Purpose: True 4D IMRT planning, based on simultaneous spatiotemporal optimization, has been shown to significantly improve plan quality in lung radiotherapy. However, the high computational complexity associated with such planning represents a significant barrier to widespread clinical deployment. We introduce an early prototype GPU-enabled Eclipse workstation for inverse planning. To our knowledge, this is the first GPU-integrated Eclipse system, demonstrating the potential for clinical translation of GPU computing on a major commercially available TPS. Methods: The prototype system comprised four NVIDIA Tesla K80 GPUs, with a maximum processing capability of 8.5 Tflops per K80 card. The system architecture consisted of three key modules: (i) a GPU-based inverse planning module using a highly parallelizable, swarm intelligence-based global optimization algorithm, (ii) a GPU-based open-source B-spline deformable image registration module, Elastix, and (iii) a CUDA-based data management module. For evaluation, aperture fluence weights in an IMRT plan were optimized over 9 beams, 166 apertures and 10 respiratory phases (14,940 variables) for a lung cancer case (GTV = 95 cc, right lower lobe, 15 mm cranio-caudal motion). Sensitivity of the planning time and memory expense to parameter variations was quantified. Results: GPU-based inverse planning was significantly accelerated compared to its CPU counterpart (36 vs 488 min, for 10 phases, 10 search agents and 10 iterations). The optimized IMRT plan significantly improved OAR sparing compared to the original internal target volume (ITV)-based clinical plan, while maintaining prescribed tumor coverage. The dose-sparing improvements were: esophagus Dmax 50%, heart Dmax 42% and spinal cord Dmax 25%. Conclusion: Our early prototype system demonstrates that, through massive parallelization, computationally intense tasks such as 4D treatment planning can be accomplished in clinically feasible timeframes.
With further optimization, such systems are expected to enable the eventual clinical translation of higher-dimensional and complex treatment planning strategies to significantly improve plan quality. This work was partially supported through research funding from National Institutes of Health (R01CA169102) and Varian Medical Systems, Palo Alto, CA, USA.
Inverse 4D conformal planning for lung SBRT using particle swarm optimization
NASA Astrophysics Data System (ADS)
Modiri, A.; Gu, X.; Hagan, A.; Bland, R.; Iyengar, P.; Timmerman, R.; Sawant, A.
2016-08-01
A critical aspect of highly potent regimens such as lung stereotactic body radiation therapy (SBRT) is to avoid collateral toxicity while achieving planning target volume (PTV) coverage. In this work, we describe four-dimensional conformal radiotherapy using a highly parallelizable swarm intelligence-based stochastic optimization technique. Conventional lung CRT-SBRT uses a 4DCT to create an internal target volume and then, using forward-planning, generates a 3D conformal plan. In contrast, we investigate an inverse-planning strategy that uses 4DCT data to create a 4D conformal plan, which is optimized across the three spatial dimensions (3D) as well as time, as represented by the respiratory phase. The key idea is to use respiratory motion as an additional degree of freedom. We iteratively adjust fluence weights for all beam apertures across all respiratory phases considering OAR sparing, PTV coverage and delivery efficiency. To demonstrate proof-of-concept, five non-small-cell lung cancer SBRT patients were retrospectively studied. The 4D optimized plans achieved PTV coverage comparable to the corresponding clinically delivered plans while showing significantly superior OAR sparing ranging from 26% to 83% for Dmax heart, 10%-41% for Dmax esophagus, 31%-68% for Dmax spinal cord and 7%-32% for V13 lung.
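The iterative fluence-weight adjustment described above can be sketched with a minimal particle swarm optimizer. Everything below is illustrative: the objective is a toy quadratic stand-in for the clinical dose objectives, and the problem size and hyperparameters are not those of the paper (which optimizes thousands of aperture fluence weights across respiratory phases).

```python
import numpy as np

def pso_minimize(objective, dim, n_agents=10, n_iters=200, seed=0):
    """Minimal global-best particle swarm optimizer over [0, 1]^dim.

    Illustrative sketch only; the paper's optimizer, objectives and
    problem dimensions are far larger.
    """
    rng = np.random.default_rng(seed)
    w, c1, c2 = 0.7, 1.5, 1.5                     # inertia and acceleration factors
    x = rng.uniform(0.0, 1.0, (n_agents, dim))    # candidate fluence weights
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([objective(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()            # global best
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_agents, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, 0.0, 1.0)              # weights stay in bounds
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, objective(g)

# Toy surrogate objective: deviation from a "prescription" weight pattern.
target = np.linspace(0.2, 0.8, 8)
best, best_f = pso_minimize(lambda p: np.sum((p - target) ** 2), dim=8)
```

In the paper's setting the objective would instead score OAR sparing, PTV coverage and delivery efficiency from the dose computed for a given set of per-phase aperture weights.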
Review: Optimization methods for groundwater modeling and management
NASA Astrophysics Data System (ADS)
Yeh, William W.-G.
2015-09-01
Optimization methods have been used in groundwater modeling as well as for the planning and management of groundwater systems. This paper reviews and evaluates the various optimization methods that have been used for solving the inverse problem of parameter identification (estimation), experimental design, and groundwater planning and management. Various model selection criteria are discussed, as well as criteria used for model discrimination. The inverse problem of parameter identification concerns the optimal determination of model parameters using water-level observations. In general, the optimal experimental design seeks to find sampling strategies for the purpose of estimating the unknown model parameters. A typical objective of optimal conjunctive-use planning of surface water and groundwater is to minimize the operational costs of meeting water demand. The optimization methods include mathematical programming techniques such as linear programming, quadratic programming, dynamic programming, stochastic programming, nonlinear programming, and the global search algorithms such as genetic algorithms, simulated annealing, and tabu search. Emphasis is placed on groundwater flow problems as opposed to contaminant transport problems. A typical two-dimensional groundwater flow problem is used to explain the basic formulations and algorithms that have been used to solve the formulated optimization problems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chi, Y; Li, Y; Tian, Z
2015-06-15
Purpose: Pencil-beam or superposition-convolution type dose calculation algorithms are routinely used in inverse plan optimization for intensity modulated radiation therapy (IMRT). However, due to their limited accuracy in some challenging cases, e.g. lung, the resulting dose may lose its optimality after being recomputed using an accurate algorithm, e.g. Monte Carlo (MC). It is the objective of this study to evaluate the feasibility and advantages of a new method to include MC in the treatment planning process. Methods: We developed a scheme to iteratively perform MC-based beamlet dose calculations and plan optimization. In the MC stage, a GPU-based dose engine was used and the particle number sampled from a beamlet was proportional to its optimized fluence from the previous step. We tested this scheme in four lung cancer IMRT cases. For each case, the original plan dose, plan dose re-computed by MC, and dose optimized by our scheme were obtained. Clinically relevant dosimetric quantities in these three plans were compared. Results: Although the original plan achieved a satisfactory PTV dose coverage, after re-computing doses using the MC method, it was found that the PTV D95% was reduced by 4.60%–6.67%. After re-optimizing these cases with our scheme, the PTV coverage was improved to the same level as in the original plan, while the critical OAR coverages were maintained at clinically acceptable levels. Regarding the computation time, it took on average 144 sec per case using only one GPU card, including both MC-based beamlet dose calculation and treatment plan optimization. Conclusion: The achieved dosimetric gains and high computational efficiency indicate the feasibility and advantages of the proposed MC-based IMRT optimization method. Comprehensive validations in more patient cases are in progress.
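A stripped-down version of this alternating scheme might look as follows. The "MC engine" here is a mock with invented numbers: beamlet doses carry statistical noise that shrinks as more particles are assigned to a beamlet, mimicking the fluence-proportional particle sampling described above.

```python
import numpy as np

rng = np.random.default_rng(1)
n_beamlets, n_voxels = 6, 4
true_D = rng.uniform(0.5, 1.5, (n_voxels, n_beamlets))   # toy beamlet dose matrix
prescription = np.full(n_voxels, 10.0)
init_residual = np.linalg.norm(true_D @ np.ones(n_beamlets) - prescription)

def mock_mc(fluence, total_particles=6000):
    """Stand-in for the MC stage: noisy beamlet doses, with noise
    ~ 1/sqrt(particles), where particles are allocated in proportion
    to the current optimized fluence."""
    particles = np.maximum(1.0, total_particles * fluence / fluence.sum())
    noise = rng.standard_normal(true_D.shape) / np.sqrt(particles)
    return true_D * (1.0 + 0.1 * noise)

fluence = np.ones(n_beamlets)
for _ in range(5):                     # alternate MC and optimization stages
    D = mock_mc(fluence)               # re-simulate with current fluence
    for _ in range(1000):              # projected gradient on ||D f - p||^2
        grad = D.T @ (D @ fluence - prescription)
        fluence = np.maximum(0.0, fluence - 0.02 * grad)

residual = np.linalg.norm(true_D @ fluence - prescription)
```

The paper's optimization stage is of course a full clinical objective, not this least-squares toy; the sketch only shows the alternation between fluence-weighted MC sampling and re-optimization.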
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, Y; Huang, Z; Lo, S
2015-06-15
Purpose: To improve Gamma Knife SRS treatment efficiency for brain metastases and compare the differences of treatment time and radiobiological effects between two different planning methods of automatic filling and manual placement of shots with inverse planning. Methods: T1-weighted MRI images with gadolinium contrast from five patients with a single brain metastatic lesion were used in this retrospective study. Among them, two were from primary breast cancer, two from primary melanoma cancer and one from primary prostate cancer. For each patient, two plans were generated in Leksell GammaPlan 10.1.1 for radiosurgical treatment with a Leksell GammaKnife Perfexion machine: one with automatic filling, automatic sector configuration and inverse optimization (Method 1); and the other with manual placement of shots, manual setup of collimator sizes, manual setup of sector blocking and inverse optimization (Method 2). Dosimetric quality of the plans was evaluated with parameters of Coverage, Selectivity, Gradient-Index and DVH. Beam-on Time, Number-of-Shots and Tumor Control Probability (TCP) were compared for the two plans while keeping their dosimetric quality very similar. Relative reductions of Beam-on Time and Number-of-Shots were calculated as ratios between the two plans and used for quantitative analysis. Results: With very similar dosimetric and radiobiological plan quality, plans created with Method 2 had significantly reduced treatment time. Relative reduction of Beam-on Time ranged from 20% to 51% (median: 29%, p = 0.001), and reduction of Number-of-Shots ranged from 5% to 67% (median: 40%, p = 0.0002), respectively. Time of plan creation for Method 1 and Method 2 was similar, approximately 20 minutes, excluding the time for tumor delineation. TCP calculated for the tumors from differential DVHs did not show a significant difference between the two plans (p = 0.35). 
Conclusion: The method of manual setup combined with inverse optimization in LGP for treatment of brain metastatic lesions with the Perfexion can achieve significantly higher time efficiency without degrading treatment quality.
Improving IMRT delivery efficiency using intensity limits during inverse planning.
Coselmon, Martha M; Moran, Jean M; Radawski, Jeffrey D; Fraass, Benedick A
2005-05-01
Inverse planned intensity modulated radiotherapy (IMRT) fields can be highly modulated due to the large number of degrees of freedom involved in the inverse planning process. Additional modulation typically results in a more optimal plan, although the clinical rewards may be small or offset by additional delivery complexity and/or increased dose from transmission and leakage. Increasing modulation decreases delivery efficiency, and may lead to plans that are more sensitive to geometrical uncertainties. The purpose of this work is to assess the use of maximum intensity limits in inverse IMRT planning as a simple way to increase delivery efficiency without significantly affecting plan quality. Nine clinical cases (three each for brain, prostate, and head/neck) were used to evaluate advantages and disadvantages of limiting maximum intensity to increase delivery efficiency. IMRT plans were generated using in-house protocol-based constraints and objectives for the brain and head/neck, and RTOG 9406 dose volume objectives in the prostate. Each case was optimized at a series of maximum intensity ratios (the product of the maximum intensity and the number of beams divided by the prescribed dose to the target volume), and evaluated in terms of clinical metrics, dose-volume histograms, monitor units (MU) required per fraction (SMLC and DMLC delivery), and intensity map variation (a measure of the beam modulation). In each site tested, it was possible to reduce total monitor units by constraining the maximum allowed intensity without compromising the clinical acceptability of the plan. Monitor unit reductions up to 38% were observed for SMLC delivery, while reductions up to 29% were achieved for DMLC delivery. In general, complicated geometries saw a smaller reduction in monitor units for both delivery types, although DMLC delivery required significantly more monitor units in all cases. 
Constraining the maximum intensity in an inverse IMRT plan is a simple way to improve delivery efficiency without compromising plan objectives.
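The intensity-limiting idea above is simple to express in code. In the sketch below the fluence maps, the gamma-distributed toy intensities, and the modulation measure are all illustrative (the paper's exact intensity-map-variation definition may differ); the cap is derived from the maximum intensity ratio as defined in the abstract (ratio = maximum intensity × number of beams / prescribed dose).

```python
import numpy as np

def cap_intensity(fluence_maps, ratio, n_beams, prescribed_dose):
    """Clip beamlet intensities so the maximum intensity ratio,
    I_max * n_beams / prescribed_dose, does not exceed `ratio`."""
    i_max = ratio * prescribed_dose / n_beams
    return [np.minimum(f, i_max) for f in fluence_maps]

def intensity_map_variation(f):
    """Simple modulation measure: mean absolute neighbour difference.
    (Illustrative; not necessarily the paper's definition.)"""
    return np.abs(np.diff(f, axis=0)).mean() + np.abs(np.diff(f, axis=1)).mean()

rng = np.random.default_rng(2)
maps = [rng.gamma(2.0, 20.0, (10, 10)) for _ in range(5)]   # 5 toy beams
capped = cap_intensity(maps, ratio=3.0, n_beams=5, prescribed_dose=70.0)
```

Because clipping is a 1-Lipschitz operation, it can only reduce (never increase) this neighbour-difference modulation measure, which is consistent with the paper's observation that capped plans are less modulated and cheaper to deliver.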
Censor, Yair; Unkelbach, Jan
2012-04-01
In this paper we look at the development of radiation therapy treatment planning from a mathematical point of view. Historically, planning for Intensity-Modulated Radiation Therapy (IMRT) has been considered an inverse problem. We discuss first the two fundamental approaches that have been investigated to solve this inverse problem: continuous analytic inversion techniques on the one hand, and fully-discretized algebraic methods on the other. In the second part of the paper, we review another fundamental question which has been subject to debate from the beginning of IMRT until the present day: the rotation therapy approach versus fixed-angle IMRT. This builds a bridge from historic work on IMRT planning to contemporary research in the context of Intensity-Modulated Arc Therapy (IMAT). Copyright © 2011 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Universal field matching in craniospinal irradiation by a background-dose gradient-optimized method.
Traneus, Erik; Bizzocchi, Nicola; Fellin, Francesco; Rombi, Barbara; Farace, Paolo
2018-01-01
The gradient-optimized methods are overcoming the traditional feathering methods to plan field junctions in craniospinal irradiation. In this note, a new gradient-optimized technique, based on the use of a background dose, is described. Treatment planning was performed by RayStation (RaySearch Laboratories, Stockholm, Sweden) on the CT scans of a pediatric patient. Both proton (by pencil beam scanning) and photon (by volumetric modulated arc therapy) treatments were planned with three isocenters. An 'in silico' ideal background dose was created first to cover the upper-spinal target and to produce a perfect dose gradient along the upper and lower junction regions. Using it as background, the cranial and the lower-spinal beams were planned by inverse optimization to obtain dose coverage of their relevant targets and of the junction volumes. Finally, the upper-spinal beam was inversely planned after removal of the background dose and with the previously optimized beams switched on. In both proton and photon plans, the optimized cranial and the lower-spinal beams produced a perfect linear gradient in the junction regions, complementary to that produced by the optimized upper-spinal beam. The final dose distributions showed a homogeneous coverage of the targets. Our simple technique allowed us to obtain high-quality gradients in the junction region. It works universally for photons as well as protons and could be applied in any TPS that allows managing a background dose. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
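The complementary-gradient idea behind such junctions can be shown in a few lines (junction width and normalization are invented for illustration): if the background imposes a linear ramp down across the junction, the adjacent field is optimized to the complementary ramp up, so the summed dose stays uniform.

```python
import numpy as np

# Toy 1D profile along the cranio-caudal axis of a junction region.
z = np.linspace(0.0, 3.0, 31)                   # cm across the junction
ramp_down = np.clip(1.0 - z / 3.0, 0.0, 1.0)    # background / cranial-field gradient
ramp_up = 1.0 - ramp_down                        # inversely planned complementary field
total = ramp_down + ramp_up                      # uniform dose across the junction
```

The robustness claim of gradient junctions follows from this picture: a small longitudinal setup shift slides one ramp relative to the other, producing only a mild, spread-out dose perturbation instead of the sharp hot/cold spot of an abrupt field edge.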
Zatsiorsky, Vladimir M.
2011-01-01
One of the key problems of motor control is the redundancy problem, in particular how the central nervous system (CNS) chooses an action out of infinitely many possible. A promising way to address this question is to assume that the choice is made based on optimization of a certain cost function. A number of cost functions have been proposed in the literature to explain performance in different motor tasks: from force sharing in grasping to path planning in walking. However, the problem of uniqueness of the cost function(s) was not addressed until recently. In this article, we analyze two methods of finding additive cost functions in inverse optimization problems with linear constraints, so-called linear-additive inverse optimization problems. These methods are based on the Uniqueness Theorem for inverse optimization problems that we proved recently (Terekhov et al., J Math Biol 61(3):423–453, 2010). Using synthetic data, we show that both methods allow for determining the cost function. We analyze the influence of noise on both methods. Finally, we show how a violation of the conditions of the Uniqueness Theorem may lead to incorrect solutions of the inverse optimization problem. PMID:21311907
Evaluation of an artificial intelligence guided inverse planning system: clinical case study.
Yan, Hui; Yin, Fang-Fang; Willett, Christopher
2007-04-01
An artificial intelligence (AI) guided method for parameter adjustment of inverse planning was implemented on a commercial inverse treatment planning system. For evaluation purposes, four typical clinical cases were tested and the results from both plans achieved by automated and manual methods were compared. The procedure of parameter adjustment mainly consists of three major loops. Each loop is in charge of modifying parameters of one category, which is carried out by a specially customized fuzzy inference system. Multiple physician-prescribed constraints for a selected volume were adopted to account for the tradeoff between prescription dose to the PTV and dose-volume constraints for critical organs. The searching process for an optimal parameter combination began with the first constraint and proceeded to the next until a plan with acceptable dose was achieved. The initial setup of the plan parameters was the same for each case and was adjusted independently by both manual and automated methods. After the parameters of one category were updated, the intensity maps of all fields were re-optimized and the plan dose was subsequently re-calculated. When the final plan was reached, dose statistics were calculated from both plans and compared. For the planning target volume (PTV), the dose to 95% of the volume was up to 10% higher in plans using the automated method than in those using the manual method. For critical organs, an average decrease of the plan dose was achieved. However, the automated method could not improve the plan dose for some critical organs due to limitations of the inference rules currently employed. For normal tissue, there was no significant difference between plan doses achieved by either the automated or the manual method. With the AI-guided method, the basic parameter adjustment task can be accomplished automatically, achieving plan doses comparable to those obtained with the manual method. 
Future improvements to incorporate case-specific inference rules are essential to fully automate the inverse planning process.
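One constraint-adjustment step in the spirit of such a fuzzy inference system might be sketched as below. The membership functions, rule outputs, and weighted-average defuzzification are invented for illustration; the abstract does not publish the actual rule base.

```python
def adjust_weight(violation, weight):
    """Toy fuzzy-style rule: the larger the constraint violation (in %),
    the more aggressively its penalty weight is increased.
    Memberships and rule outputs are illustrative, not the paper's."""
    # Triangular memberships for "small", "moderate", "large" violation.
    small = max(0.0, 1.0 - violation / 5.0)
    large = min(1.0, max(0.0, (violation - 5.0) / 10.0))
    moderate = max(0.0, 1.0 - small - large)
    # Rule outputs: keep (x1.0), mild boost (x1.5), strong boost (x2.5),
    # combined by weighted-average defuzzification.
    factor = (small * 1.0 + moderate * 1.5 + large * 2.5) / (small + moderate + large)
    return weight * factor

# Weight trajectory as the violation shrinks over re-optimization loops.
w = 1.0
for violation in [12.0, 6.0, 2.0]:
    w = adjust_weight(violation, w)
```

Between such weight updates the planning system would re-optimize the intensity maps and re-evaluate the constraint violations, exactly as the loop structure in the abstract describes.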
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wan Chan Tseung, Hok Seum, E-mail: wanchantseung.hok@mayo.edu; Ma, Jiasen; Kreofsky, Cole R.
Purpose: Our aim is to demonstrate the feasibility of fast Monte Carlo (MC)–based inverse biological planning for the treatment of head and neck tumors in spot-scanning proton therapy. Methods and Materials: Recently, a fast and accurate graphics processor unit (GPU)–based MC simulation of proton transport was developed and used as the dose-calculation engine in a GPU-accelerated intensity modulated proton therapy (IMPT) optimizer. Besides dose, the MC can simultaneously score the dose-averaged linear energy transfer (LETd), which makes biological dose (BD) optimization possible. To convert from LETd to BD, a simple linear relation was assumed. By use of this novel optimizer, inverse biological planning was applied to 4 patients, including 2 small and 1 large thyroid tumor targets, as well as 1 glioma case. To create these plans, constraints were placed to maintain the physical dose (PD) within 1.25 times the prescription while maximizing target BD. For comparison, conventional intensity modulated radiation therapy (IMRT) and IMPT plans were also created using Eclipse (Varian Medical Systems) in each case. The same critical-structure PD constraints were used for the IMRT, IMPT, and biologically optimized plans. The BD distributions for the IMPT plans were obtained through MC recalculations. Results: Compared with standard IMPT, the biologically optimal plans for patients with small tumor targets displayed a BD escalation that was around twice the PD increase. Dose sparing to critical structures was improved compared with both IMRT and IMPT. No significant BD increase could be achieved for the large thyroid tumor case and when the presence of critical structures mitigated the contribution of additional fields. The calculation of the biologically optimized plans can be completed in a clinically viable time (<30 minutes) on a small 24-GPU system. 
Conclusions: By exploiting GPU acceleration, MC-based, biologically optimized plans were created for patients with small tumor targets. This optimizer will be used in an upcoming feasibility trial on LETd painting for radioresistant tumors.
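The abstract only states that a simple linear relation converts LETd to biological dose. A common form of such a relation in the literature is BD = D·(1 + c·LETd); the functional form and the coefficient c below are assumptions for illustration, not values from the paper.

```python
import numpy as np

def biological_dose(physical_dose, let_d, c=0.04):
    """Linear LETd-to-biological-dose model: BD = D * (1 + c * LETd).

    Both the form and c (here in um/keV) are illustrative assumptions;
    the paper only states that a linear relation was used.
    """
    return physical_dose * (1.0 + c * let_d)

d = np.array([2.0, 2.0, 2.0])       # physical dose per voxel (Gy)
let = np.array([1.0, 4.0, 10.0])    # LETd per voxel (keV/um)
bd = biological_dose(d, let)
```

Under such a model, raising LETd in the target at fixed physical dose raises BD, which is why the optimizer can escalate BD roughly twice as fast as PD in small targets.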
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vandewouw, Marlee M., E-mail: marleev@mie.utoronto
Purpose: Continuous dose delivery in radiation therapy treatments has been shown to decrease total treatment time while improving the dose conformity and distribution homogeneity over the conventional step-and-shoot approach. The authors develop an inverse treatment planning method for Gamma Knife® Perfexion™ that continuously delivers dose along a path in the target. Methods: The authors’ method comprises two steps: find a path within the target, then solve a mixed integer optimization model to find the optimal collimator configurations and durations along the selected path. Robotic path-finding techniques, specifically, simultaneous localization and mapping (SLAM) using an extended Kalman filter, are used to obtain a path that travels sufficiently close to selected isocentre locations. SLAM is extended, in a novel application, to explore a 3D, discrete environment, which is the target discretized into voxels. Further novel extensions are incorporated into the steering mechanism to account for target geometry. Results: The SLAM method was tested on seven clinical cases and compared to clinical, Hamiltonian path continuous delivery, and inverse step-and-shoot treatment plans. The SLAM approach improved dose metrics compared to the clinical plans and Hamiltonian path continuous delivery plans. Beam-on times improved over clinical plans, and had mixed performance compared to Hamiltonian path continuous plans. The SLAM method is also shown to be robust to path selection inaccuracies, isocentre selection, and dose distribution. Conclusions: The SLAM method for continuous delivery provides decreased total treatment time and increased treatment quality compared to both clinical and inverse step-and-shoot plans, and outperforms existing path methods in treatment quality. It also accounts for uncertainty in treatment planning by accommodating inaccuracies.
D'Amours, Michel; Pouliot, Jean; Dagnault, Anne; Verhaegen, Frank; Beaulieu, Luc
2011-12-01
Brachytherapy planning software relies on the Task Group report 43 dosimetry formalism. This formalism, based on a water approximation, neglects various heterogeneous materials present during treatment. Various studies have suggested that these heterogeneities should be taken into account to improve the treatment quality. The present study sought to demonstrate the feasibility of incorporating Monte Carlo (MC) dosimetry within an inverse planning algorithm to improve the dose conformity and increase the treatment quality. The method was based on precalculated dose kernels in full patient geometries, representing the dose distribution of a brachytherapy source at a single dwell position using MC simulations and the Geant4 toolkit. These dose kernels are used by the inverse planning by simulated annealing tool to produce a fast MC-based plan. A test was performed for an interstitial brachytherapy breast treatment using two different high-dose-rate brachytherapy sources: the microSelectron iridium-192 source and the electronic brachytherapy source Axxent operating at 50 kVp. A research version of the inverse planning by simulated annealing algorithm was combined with MC to provide a method to fully account for the heterogeneities in dose optimization, using the MC method. The effect of the water approximation was found to depend on photon energy, with greater dose attenuation for the lower energies of the Axxent source compared with iridium-192. For the latter, an underdosage of 5.1% for the dose received by 90% of the clinical target volume was found. A new method to optimize afterloading brachytherapy plans that uses MC dosimetric information was developed. Including computed tomography-based information in MC dosimetry in the inverse planning process was shown to take into account the full range of scatter and heterogeneity conditions. This led to significant dose differences compared with the Task Group report 43 approach for the Axxent source. 
Copyright © 2011 Elsevier Inc. All rights reserved.
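The core of inverse planning by simulated annealing over dwell times, using precomputed per-dwell-position dose kernels (here toy numbers standing in for the MC-calculated kernels described above), can be sketched as:

```python
import math
import random

random.seed(3)
# Precomputed kernel: dose to 2 voxels per unit dwell time at 3 positions
# (toy values; the paper's kernels come from MC in full patient geometry).
kernel = [[1.0, 0.2], [0.3, 0.9], [0.5, 0.5]]
target = [10.0, 10.0]                           # prescribed voxel doses

def cost(t):
    """Quadratic deviation of delivered dose from the prescription."""
    dose = [sum(k[v] * ti for k, ti in zip(kernel, t)) for v in range(2)]
    return sum((dv - pv) ** 2 for dv, pv in zip(dose, target))

cur = [1.0, 1.0, 1.0]                           # dwell times, one per position
cur_c = cost(cur)
best, best_c = list(cur), cur_c
temp = 5.0
for _ in range(4000):
    cand = list(cur)
    i = random.randrange(len(cand))
    cand[i] = max(0.0, cand[i] + random.gauss(0.0, 0.5))   # perturb one dwell time
    cand_c = cost(cand)
    # Metropolis acceptance: take improvements, sometimes accept worse moves.
    if cand_c < cur_c or random.random() < math.exp((cur_c - cand_c) / temp):
        cur, cur_c = cand, cand_c
        if cur_c < best_c:
            best, best_c = list(cur), cur_c
    temp *= 0.999                               # geometric cooling schedule
```

The paper's contribution is what goes into `kernel`: replacing TG-43 water-approximation kernels with MC kernels computed in the full heterogeneous patient geometry, while the annealing loop itself is unchanged.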
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chajon, Enrique; Dumas, Isabelle; Touleimat, Mahmoud B.Sc.
2007-11-01
Purpose: The purpose of this study was to evaluate the inverse planning simulated annealing (IPSA) software for the optimization of dose distribution in patients with cervix carcinoma treated with MRI-based pulsed-dose rate intracavitary brachytherapy. Methods and Materials: Thirty patients treated with a technique using a customized vaginal mold were selected. Dose-volume parameters obtained using the IPSA method were compared with the classic manual optimization method (MOM). Target volumes and organs at risk were delineated according to the Gynecological Brachytherapy Group/European Society for Therapeutic Radiology and Oncology recommendations. Because the pulsed dose rate program was based on clinical experience with low dose rate, dwell time values were required to be as homogeneous as possible. To achieve this goal, different modifications of the IPSA program were applied. Results: The first dose distribution calculated by the IPSA algorithm proposed a heterogeneous distribution of dwell time positions. The mean D90, D100, and V100 calculated with both methods did not differ significantly when the constraints were applied. For the bladder, doses calculated at the ICRU reference point derived from the MOM differed significantly from the doses calculated by the IPSA method (mean, 58.4 vs. 55 Gy respectively; p = 0.0001). For the rectum, the doses calculated at the ICRU reference point were also significantly lower with the IPSA method. Conclusions: The inverse planning method provided fast and automatic solutions for the optimization of dose distribution. However, the straightforward use of IPSA generated significant heterogeneity in dwell time values. Caution is therefore recommended in the use of inverse optimization tools, together with a study of the clinical relevance of new dosimetric rules.
Comparison of anatomy-based, fluence-based and aperture-based treatment planning approaches for VMAT
NASA Astrophysics Data System (ADS)
Rao, Min; Cao, Daliang; Chen, Fan; Ye, Jinsong; Mehta, Vivek; Wong, Tony; Shepard, David
2010-11-01
Volumetric modulated arc therapy (VMAT) has the potential to reduce treatment times while producing comparable or improved dose distributions relative to fixed-field intensity-modulated radiation therapy. In order to take full advantage of the VMAT delivery technique, one must select a robust inverse planning tool. The purpose of this study was to evaluate the effectiveness and efficiency of VMAT planning techniques of three categories: anatomy-based, fluence-based and aperture-based inverse planning. We have compared these techniques in terms of the plan quality, planning efficiency and delivery efficiency. Fourteen patients were selected for this study including six head-and-neck (HN) cases, and two cases each of prostate, pancreas, lung and partial brain. For each case, three VMAT plans were created. The first VMAT plan was generated based on the anatomical geometry. In the Elekta ERGO++ treatment planning system (TPS), segments were generated based on the beam's eye view (BEV) of the target and the organs at risk. The segment shapes were then exported to Pinnacle3 TPS followed by segment weight optimization and final dose calculation. The second VMAT plan was generated by converting optimized fluence maps (calculated by the Pinnacle3 TPS) into deliverable arcs using an in-house arc sequencer. The third VMAT plan was generated using the Pinnacle3 SmartArc IMRT module which is an aperture-based optimization method. All VMAT plans were delivered using an Elekta Synergy linear accelerator and the plan comparisons were made in terms of plan quality and delivery efficiency. The results show that for cases of little or modest complexity such as prostate, pancreas, lung and brain, the anatomy-based approach provides similar target coverage and critical structure sparing, but less conformal dose distributions as compared to the other two approaches. 
For more complex HN cases, the anatomy-based approach is not able to provide clinically acceptable VMAT plans, while highly conformal dose distributions were obtained using both aperture-based and fluence-based inverse planning techniques. The aperture-based approach provides better dose conformity than the fluence-based technique in complex cases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guthier, C; University Medical Center Mannheim, Mannheim; Harvard Medical School, Boston, MA
Purpose: Inverse treatment planning (ITP) for interstitial HDR brachytherapy of gynecologic cancers seeks to maximize coverage of the clinical target volumes (tumor and vagina) while respecting dose-volume-histogram related dosimetric measures (DMs) for organs at risk (OARs). Commercially available ITP tools do not support DM-based planning because it is computationally too expensive to solve. In this study we present a novel approach that allows fast ITP for gynecologic cancers based on DMs for the first time. Methods: This novel strategy is an optimization model based on a smooth DM-based objective function. The smooth approximation is achieved by utilizing a logistic function for the evaluation of DMs. The resulting nonconvex and constrained optimization problem is then optimized with a BFGS algorithm. The model was evaluated using the implant geometry extracted from 20 patient treatment plans under an IRB-approved retrospective study. For each plan, the final DMs were evaluated and compared to the original clinical plans. The CTVs were the contoured tumor volume and the contoured surface of the vagina. Statistical significance was evaluated with a one-sided paired Wilcoxon signed-rank test. Results: As did the clinical plans, all generated plans fulfilled the defined DMs for OARs. The proposed strategy showed a statistically significant improvement (p<0.001) in coverage of the tumor and vagina, with absolute improvements of related DMs of (6.9 ± 7.9)% and (28.2 ± 12.0)%, respectively. This was achieved with a statistically significant (p<0.01) decrease of the high-dose-related DM for the tumor. The runtime of the optimization was (2.3 ± 2.0) seconds. Conclusion: We demonstrated using clinical data that our novel approach allows rapid DM-based optimization with improved coverage of CTVs with fewer hot spots. Being up to three orders of magnitude faster than the current clinical practice, the method dramatically shortens planning time.
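The logistic smoothing of a dose-volume measure can be sketched as follows. A coverage measure (fraction of target voxels at or above the prescription dose) is a mean of step functions; replacing each step with a logistic sigmoid makes it differentiable, so gradient-based solvers can optimize it. The dose matrix, sigmoid steepness, and prescription below are illustrative, and plain projected gradient ascent stands in for the constrained BFGS solver used in the paper.

```python
import numpy as np

k, d_pres = 4.0, 10.0                  # sigmoid steepness, prescription (toy values)
D = np.array([[1.0, 0.2, 0.1],
              [0.3, 0.9, 0.2],
              [0.1, 0.4, 1.1],
              [0.6, 0.5, 0.3]])        # dose per unit dwell time, 4 voxels x 3 positions

def smooth_coverage(t):
    """Differentiable surrogate for the fraction of voxels with dose >= d_pres."""
    dose = D @ t
    return float(np.mean(1.0 / (1.0 + np.exp(-k * (dose - d_pres)))))

def num_grad(t, eps=1e-5):
    """Central-difference gradient of the smooth coverage surrogate."""
    g = np.zeros_like(t)
    for i in range(t.size):
        tp, tm = t.copy(), t.copy()
        tp[i] += eps
        tm[i] -= eps
        g[i] = (smooth_coverage(tp) - smooth_coverage(tm)) / (2.0 * eps)
    return g

t = np.full(3, 7.0)                    # warm start near the prescription level
start = smooth_coverage(t)
for _ in range(3000):
    t = np.maximum(0.0, t + 0.2 * num_grad(t))   # projected gradient ascent
final = smooth_coverage(t)
```

As k grows, the sigmoid approaches the exact step-function DM; the paper balances this steepness against the smoothness the BFGS solver needs.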
NASA Astrophysics Data System (ADS)
Sanchez-Parcerisa, D.; Cortés-Giraldo, M. A.; Dolney, D.; Kondrla, M.; Fager, M.; Carabe, A.
2016-02-01
In order to integrate radiobiological modelling with clinical treatment planning for proton radiotherapy, we extended our in-house treatment planning system FoCa with a 3D analytical algorithm to calculate linear energy transfer (LET) in voxelized patient geometries. Both active scanning and passive scattering delivery modalities are supported. The analytical calculation is much faster than the Monte-Carlo (MC) method and it can be implemented in the inverse treatment planning optimization suite, allowing us to create LET-based objectives in inverse planning. The LET was calculated by combining a 1D analytical approach including a novel correction for secondary protons with pencil-beam type LET-kernels. Then, these LET kernels were inserted into the proton-convolution-superposition algorithm in FoCa. The analytical LET distributions were benchmarked against MC simulations carried out in Geant4. A cohort of simple phantom and patient plans representing a wide variety of sites (prostate, lung, brain, head and neck) was selected. The calculation algorithm was able to reproduce the MC LET to within 6% (1 standard deviation) for low-LET areas (under 1.7 keV μm⁻¹) and within 22% for the high-LET areas above that threshold. The dose and LET distributions can be further extended, using radiobiological models, to include radiobiological effectiveness (RBE) calculations in the treatment planning system. This implementation also allows for radiobiological optimization of treatments by including RBE-weighted dose constraints in the inverse treatment planning process.
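The way per-contribution doses and LET values combine into a voxel-wise dose-averaged LET can be shown in a few lines. The per-beam doses and track-LET values below are invented for illustration; the paper's kernels are far more detailed.

```python
import numpy as np

def dose_averaged_let(doses, lets):
    """Dose-averaged LET per voxel: LETd = sum(d_i * L_i) / sum(d_i),
    summed over contributions (e.g. pencil beams); 0 where no dose."""
    doses = np.asarray(doses, float)
    lets = np.asarray(lets, float)
    total = doses.sum(axis=0)
    weighted = (doses * lets).sum(axis=0)
    return np.where(total > 0, weighted / np.maximum(total, 1e-12), 0.0)

# Two pencil-beam contributions to three voxels (toy values):
d = [[2.0, 1.0, 0.0],
     [1.0, 1.0, 0.0]]
L = [[1.5, 3.0, 0.0],
     [2.0, 8.0, 0.0]]
let_d = dose_averaged_let(d, L)   # keV/um per voxel
```

This dose-weighted average is what makes LETd suitable as an optimization objective: beams can be re-weighted to shift high-LET contributions without changing the total dose.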
Derivative-free generation and interpolation of convex Pareto optimal IMRT plans
NASA Astrophysics Data System (ADS)
Hoffmann, Aswin L.; Siem, Alex Y. D.; den Hertog, Dick; Kaanders, Johannes H. A. M.; Huizenga, Henk
2006-12-01
In inverse treatment planning for intensity-modulated radiation therapy (IMRT), beamlet intensity levels in fluence maps of high-energy photon beams are optimized. Treatment plan evaluation criteria are used as objective functions to steer the optimization process. Fluence map optimization can be considered a multi-objective optimization problem, for which a set of Pareto optimal solutions exists: the Pareto efficient frontier (PEF). In this paper, a constrained optimization method is pursued to iteratively estimate the PEF up to some predefined error. We use the property that the PEF is convex for a convex optimization problem to construct piecewise-linear upper and lower bounds to approximate the PEF from a small initial set of Pareto optimal plans. A derivative-free Sandwich algorithm is presented in which these bounds are used with three strategies to determine the location of the next Pareto optimal solution such that the uncertainty in the estimated PEF is maximally reduced. We show that an intelligent initial solution for a new Pareto optimal plan can be obtained by interpolation of fluence maps from neighbouring Pareto optimal plans. The method has been applied to a simplified clinical test case using two convex objective functions to map the trade-off between tumour dose heterogeneity and critical organ sparing. All three strategies produce representative estimates of the PEF. The new algorithm is particularly suitable for dynamic generation of Pareto optimal plans in interactive treatment planning.
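The Sandwich idea can be shown on a toy convex bi-objective whose scalarized minimizer has a closed form, standing in for a full fluence-map optimization run. The "widest chord segment" rule below is a deliberate simplification of the paper's bound-gap criteria, and all functions and weights are invented for illustration.

```python
# Convex bi-objective toy standing in for (tumour dose heterogeneity,
# critical organ dose): f1(x) = x^2, f2(x) = (x - 1)^2 for one plan knob x.
def scalarized_argmin(w):
    """Minimize w*f1 + (1-w)*f2 in closed form (a stand-in for one full
    inverse-planning run) and return the Pareto point (f1, f2)."""
    x = 1.0 - w                      # stationary point of the weighted sum
    return (x * x, (x - 1.0) ** 2)

# Start from two near-extreme weights, then repeatedly add the plan at the
# midpoint weight of the widest chord segment -- a crude proxy for "where the
# gap between the piecewise-linear upper and lower bounds is largest".
weights = [0.01, 0.99]
for _ in range(4):
    weights.sort()
    pts = [scalarized_argmin(w) for w in weights]
    gaps = [abs(q[0] - p[0]) for p, q in zip(pts, pts[1:])]
    i = max(range(len(gaps)), key=gaps.__getitem__)
    weights.append(0.5 * (weights[i] + weights[i + 1]))

pareto = [scalarized_argmin(w) for w in sorted(weights)]  # 6 points on the PEF
```

Because both objectives are convex, every scalarized solution is Pareto optimal, and the chords between neighbouring points bound the true frontier from above, which is exactly the property the Sandwich algorithm exploits.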
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorissen, BL; Giantsoudi, D; Unkelbach, J
Purpose: Cell survival experiments suggest that the relative biological effectiveness (RBE) of proton beams depends on linear energy transfer (LET), leading to higher RBE near the end of range. With intensity-modulated proton therapy (IMPT), multiple treatment plans that differ in the dose contribution per field may yield a similar physical dose distribution, but the RBE-weighted dose distribution may be disparate. RBE models currently do not have the required predictive power to be included in an optimization model due to the variations in experimental data. We propose an LET-based planning method that guides IMPT optimization models towards plans with reduced RBE-weighted dose in surrounding organs at risk (OARs) compared to inverse planning based on physical dose alone. Methods: Optimization models for physical dose are extended with a term for dose times LET (doseLET). Monte Carlo code is used to generate the physical dose and doseLET distribution of each individual pencil beam. The method is demonstrated for an atypical meningioma patient where the target volume abuts the brainstem and partially overlaps with the optic nerve. Results: A reference plan optimized based on physical dose alone yields high doseLET values in parts of the brainstem and optic nerve. Minimizing doseLET in these critical structures as an additional planning goal reduces the risk of high RBE-weighted dose. The resulting treatment plan avoids the distal fall-off of the Bragg peaks for shaping the dose distribution in front of critical structures. The maximum dose in the OARs evaluated with RBE models from literature is reduced by 8–14% with our method compared to conventional planning. Conclusion: LET-based inverse planning for IMPT offers the ability to reduce the RBE-weighted dose in OARs without sacrificing target dose. This project was in part supported by NCI - U19 CA 21239.
NASA Astrophysics Data System (ADS)
Morávek, Zdenek; Rickhey, Mark; Hartmann, Matthias; Bogner, Ludwig
2009-08-01
Treatment plans for intensity-modulated proton therapy may be sensitive to several sources of uncertainty. One source stems from approximations in the algorithms applied in the treatment planning system, and another depends on how robust the optimization is with regard to intra-fractional tissue movements. The delivered dose distribution may deviate substantially from the plan when systematic errors occur in the dose algorithm. Such errors can influence proton ranges and lead to improper modeling of Bragg peak degradation in heterogeneous structures, of particle scatter, or of the nuclear interaction component. Additionally, systematic errors influence the optimization process, leading to a convergence error. Uncertainties with regard to organ movements are related to the robustness of a chosen beam setup against tissue movements during irradiation. We present the inverse Monte Carlo treatment planning system IKO for protons (IKO-P), which tries to minimize the errors described above to a large extent. Additionally, robust planning is introduced by beam angle optimization according to an objective function penalizing paths representing strongly longitudinal and transversal tissue heterogeneities. The same score function is applied to optimize spot planning by the selection of a robust choice of spots. As spots can be positioned on different energy grids or on geometric grids with different space filling factors, a variety of grids were used to investigate the influence on the spot-weight distribution as a result of optimization. A tighter distribution of spot weights was assumed to result in a more robust plan with respect to movements. IKO-P is described in detail and demonstrated on a test case as well as a lung cancer case. Different options of spot planning and grid types are evaluated, yielding a superior plan quality with dose delivery to the spots from all beam directions over optimized beam directions.
This option shows a tighter spot-weight distribution and should therefore be less sensitive to movements compared to optimized directions. However, by accepting a slight loss in plan quality, the latter choice could potentially improve robustness even further by accepting only spots from the most suitable direction. The choice of a geometric grid instead of an energy grid for spot positioning has only a minor influence on the plan quality, at least for the investigated lung case.
Rapid inverse planning for pressure-driven drug infusions in the brain.
Rosenbluth, Kathryn H; Martin, Alastair J; Mittermeyer, Stephan; Eschermann, Jan; Dickinson, Peter J; Bankiewicz, Krystof S
2013-01-01
Infusing drugs directly into the brain is advantageous to oral or intravenous delivery for large molecules or drugs requiring high local concentrations with low off-target exposure. However, surgeons manually planning the cannula position for drug delivery in the brain face a challenging three-dimensional visualization task. This study presents an intuitive inverse-planning technique to identify the optimal placement that maximizes coverage of the target structure while minimizing the potential for leakage outside the target. The technique was retrospectively validated using intraoperative magnetic resonance imaging of infusions into the striatum of non-human primates and into a tumor in a canine model and applied prospectively to upcoming human clinical trials.
Poster — Thur Eve — 61: A new framework for MPERT plan optimization using MC-DAO
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, M; Lloyd, S AM; Townson, R
2014-08-15
This work combines the inverse planning technique known as Direct Aperture Optimization (DAO) with Intensity Modulated Radiation Therapy (IMRT) and combined electron and photon therapy plans. In particular, determining conditions under which Modulated Photon/Electron Radiation Therapy (MPERT) produces better dose conformality and sparing of organs at risk than traditional IMRT plans is central to the project. Presented here are the materials and methods used to generate and manipulate the DAO procedure. Included is the introduction of a powerful Java-based toolkit, the Aperture-based Monte Carlo (MC) MPERT Optimizer (AMMO), that serves as a framework for optimization and provides streamlined access to underlying particle transport packages. Comparison of the toolkit's dose calculations to those produced by the Eclipse TPS and the demonstration of a preliminary optimization are presented as first benchmarks. Excellent agreement is illustrated between the Eclipse TPS and AMMO for a 6 MV photon field. The results of a simple optimization show that the optimization framework functions as intended, although significant research remains to characterize appropriate constraints.
Fast online Monte Carlo-based IMRT planning for the MRI linear accelerator
NASA Astrophysics Data System (ADS)
Bol, G. H.; Hissoiny, S.; Lagendijk, J. J. W.; Raaymakers, B. W.
2012-03-01
The MRI accelerator, a combination of a 6 MV linear accelerator with a 1.5 T MRI, facilitates continuous patient anatomy updates regarding translations, rotations and deformations of targets and organs at risk. Accounting for these demands high speed, online intensity-modulated radiotherapy (IMRT) re-optimization. In this paper, a fast IMRT optimization system is described which combines a GPU-based Monte Carlo dose calculation engine for online beamlet generation and a fast inverse dose optimization algorithm. Tightly conformal IMRT plans are generated for four phantom cases and two clinical cases (cervix and kidney) in the presence of the magnetic fields of 0 and 1.5 T. We show that for the presented cases the beamlet generation and optimization routines are fast enough for online IMRT planning. Furthermore, there is no influence of the magnetic field on plan quality and complexity, and equal optimization constraints at 0 and 1.5 T lead to almost identical dose distributions.
Falk, Marianne; Larsson, Tobias; Keall, Paul; Chul Cho, Byung; Aznar, Marianne; Korreman, Stine; Poulsen, Per; Munck Af Rosenschold, Per
2012-03-01
Real-time dynamic multileaf collimator (MLC) tracking for management of intrafraction tumor motion can be challenging for highly modulated beams, as the leaves need to travel far to adjust for target motion perpendicular to the leaf travel direction. The plan modulation can be reduced by using a leaf position constraint (LPC) that reduces the difference in the position of adjacent MLC leaves in the plan. The purpose of this study was to investigate the impact of the LPC on the quality of inversely optimized arc radiotherapy plans and the effect of the MLC motion pattern on the dosimetric accuracy of MLC tracking delivery. Specifically, the possibility of predicting the accuracy of MLC tracking delivery based on the plan modulation was investigated. Inversely optimized arc radiotherapy plans were created on CT data of three lung cancer patients. For each case, five plans with a single 358° arc were generated with LPC priorities of 0 (no LPC), 0.25, 0.5, 0.75, and 1 (highest possible LPC), respectively. All the plans had a prescribed dose of 2 Gy × 30, used 6 MV, a maximum dose rate of 600 MU/min and a collimator angle of 45° or 315°. To quantify the plan modulation, an average adjacent leaf distance (ALD) was calculated by averaging the mean adjacent leaf distance for each control point. The linear relationship between the plan quality [i.e., the calculated dose distributions and the number of monitor units (MU)] and the LPC was investigated, and the linear regression coefficient as well as a two-tailed confidence level of 95% was used in the evaluation. The effect of the plan modulation on the performance of MLC tracking was tested by delivering the plans to a cylindrical diode array phantom moving with sinusoidal motion in the superior-inferior direction with a peak-to-peak displacement of 2 cm and a cycle time of 6 s. The delivery was adjusted to the target motion using MLC tracking, guided in real-time by an infrared optical system.
The dosimetric results were evaluated using gamma index evaluation with static target measurements as reference. The plan quality parameters did not depend significantly on the LPC (p ≥ 0.066), whereas the ALD depended significantly on the LPC (p < 0.001). The gamma index failure rate depended significantly on the ALD, weighted to the percentage of the beam delivered in each control point of the plan (ALD(w)) when MLC tracking was used (p < 0.001), but not for delivery without MLC tracking (p ≥ 0.342). The gamma index failure rate with the criteria of 2% and 2 mm was decreased from > 33.9% without MLC tracking to <31.4% (LPC 0) and <2.2% (LPC 1) with MLC tracking. The results indicate that the dosimetric robustness of MLC tracking delivery of an inversely optimized arc radiotherapy plan can be improved by incorporating leaf position constraints in the objective function without otherwise affecting the plan quality. The dosimetric robustness may be estimated prior to delivery by evaluating the ALD(w) of the plan.
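The ALD and its MU-weighted variant ALD(w) described above are straightforward to compute. A minimal sketch, with leaf positions and MU values invented for illustration:

```python
import numpy as np

def ald(leaf_positions):
    """Mean absolute distance between adjacent leaves of one MLC bank at a
    single control point; leaf_positions are leaf-tip positions in mm."""
    p = np.asarray(leaf_positions, dtype=float)
    return float(np.mean(np.abs(np.diff(p))))

def weighted_ald(control_points, mu_per_cp):
    """ALD(w): per-control-point ALD weighted by the fraction of the total
    MU delivered at each control point."""
    alds = np.array([ald(cp) for cp in control_points])
    w = np.asarray(mu_per_cp, dtype=float)
    return float(np.sum(alds * w) / np.sum(w))

cps = [[0, 5, 5, 10], [2, 2, 2, 2]]  # two control points, four leaves each
ald_w = weighted_ald(cps, [3, 1])    # (10/3 * 3 + 0 * 1) / 4 = 2.5
```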
A new Gamma Knife radiosurgery paradigm: Tomosurgery
NASA Astrophysics Data System (ADS)
Hu, Xiaoliang
The Leksell (Elekta, Stockholm, Sweden) Gamma Knife™ (LGK) is the worldwide standard-of-care for the radiosurgical treatment of a wide variety of intracranial lesions. The current LGK utilizes a step-and-shoot dose delivery mechanism where the centroid of each conformal radiation dose (i.e., the shot isocenter) requires repositioning the patient outside of the irradiation field. Perhaps the greatest challenge the LGK treatment team faces is planning the treatment of large and/or complexly shaped lesions that may be in close proximity to critical neural or vascular structures. The standard manual treatment planning approach is a time-consuming procedure where additional time spent does not guarantee the identification of an increasingly optimal treatment plan. I propose a new radiosurgery paradigm which I refer to as "Tomosurgery". The Tomosurgery paradigm begins with the division of the target volume into a series of adjacent treatment slices, each with a carefully determined optimal thickness. The use of a continuously moving disk-shaped radiation shot that moves through the lesion in a raster-scanning pattern is expected to improve overall radiation dose conformality and homogeneity. The Tomosurgery treatment planning algorithm employs a two-stage optimization strategy, which first plans each treatment slice as a simplified 2D problem and then optimally assembles the 2D treatment plans into the final 3D treatment plan. Tested on 11 clinical LGK cases, the automated inversely-generated Tomosurgery treatment plans performed as well as or better than the neurosurgeon's manually created treatment plans across all criteria: (a) dose volume histograms, (b) dose homogeneity, (c) dose conformality, and (d) critical structure damage, where applicable. LGK Tomosurgery inverse treatment planning required much less time than standard-of-care, manual (i.e., forward) LGK treatment planning procedures.
These results suggest that Tomosurgery might provide an improvement over the current LGK radiosurgery treatment planning software. As regards treatment delivery, a Tomosurgery Investigational Platform (TIP) is proposed to perform the physical validation of radiation dose delivery. The TIP should facilitate translation of the Tomosurgery paradigm to several other radiosurgery and/or radiotherapy devices without the need for expensive modification of commercial devices until the feasibility of delivering Tomosurgical treatment plans has been well established.
Kim, Yongbok; Modrick, Joseph M.; Pennington, Edward C.
2016-01-01
The objective of this work is to present commissioning procedures to clinically implement a three-dimensional (3D), image-based treatment-planning system (TPS) for high-dose-rate (HDR) brachytherapy (BT) for gynecological (GYN) cancer. The physical dimensions of the GYN applicators and their values in the virtual applicator library were within 0.4 mm of their nominal values. Reconstruction uncertainties of the titanium tandem and ovoids (T&O) were less than 0.4 mm in CT phantom studies and on average between 0.8 and 1.0 mm on MRI when compared with X-rays. In-house software, HDRCalculator, was developed to check HDR plan parameters, such as independently verifying active tandem or cylinder probe length, ovoid or cylinder size, source calibration and treatment date, and differences between average Point A dose and prescription dose. Dose-volume histograms were validated using another independent TPS. Comprehensive procedures to commission volume optimization algorithms and processes in 3D image-based planning were presented. For the difference between line and volume optimizations, the average absolute differences were 1.4% for total reference air kerma (TRAK) and 1.1% for Point A dose. Volume optimization consistency tests between versions resulted in average absolute differences of 0.2% for TRAK and 0.9 s (0.2%) for total treatment time. The data revealed that the optimizer should run for at least 1 min in order to avoid dwell time changes of more than 0.6%. For clinical GYN T&O cases, three different volume optimization techniques (graphical optimization, pure inverse planning, and hybrid inverse optimization) were investigated by comparing them against a conventional Point A technique. End-to-end testing was performed using a T&O phantom to ensure no errors or inconsistencies occurred from imaging through to planning and delivery.
The proposed commissioning procedures provide a clinically safe implementation technique for 3D image-based TPS for HDR BT for GYN cancer. PMID: 27074463
MO-FG-BRA-08: Swarm Intelligence-Based Personalized Respiratory Gating in Lung SAbR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Modiri, A; Sabouri, P; Sawant, A
Purpose: Respiratory gating is widely deployed as a clinical motion-management strategy in lung radiotherapy. In conventional gating, the beam is turned on during a pre-determined phase window; typically, around end-exhalation. In this work, we challenge the notion that end-exhalation is always the optimal gating phase. Specifically, we use a swarm-intelligence-based, inverse planning approach to determine the optimal respiratory phase and MU for each beam with respect to (i) the state of the anatomy at each phase and (ii) the time spent in that state, estimated from long-term monitoring of the patient’s breathing motion. Methods: In a retrospective study of five lung cancer patients, we compared the dosimetric performance of our proposed personalized gating (PG) with that of conventional end-of-exhale gating (CEG) and a previously-developed, fully 4D-optimized plan (combined with MLC tracking delivery). For each patient, respiratory phase probabilities (indicative of the time duration of the phase) were estimated over 2 minutes from lung tumor motion traces recorded previously using the Synchrony system (Accuray Inc.). Based on this information, inverse planning optimization was performed to calculate the optimal respiratory gating phase and MU for each beam. To ensure practical deliverability, each PG beam was constrained to deliver the assigned MU over a time duration comparable to that of CEG delivery. Results: Maximum OAR sparing for the five patients achieved by the PG and the 4D plans compared to CEG plans was: Esophagus Dmax [PG:57%, 4D:37%], Heart Dmax [PG:71%, 4D:87%], Spinal cord Dmax [PG:18%, 4D:68%] and Lung V13 [PG:16%, 4D:31%]. While patients spent the most time in exhalation, the PG-optimization chose end-exhale only for 28% of beams.
Conclusion: Our novel gating strategy achieved significant dosimetric improvements over conventional gating, and approached the upper limit represented by fully 4D optimized planning while being significantly simpler and more clinically translatable. This work was partially supported through research funding from National Institutes of Health (R01CA169102) and Varian Medical Systems, Palo Alto, CA, USA.
Liu, Han; Sintay, Benjamin; Pearman, Keith; Shang, Qingyang; Hayes, Lane; Maurer, Jacqueline; Vanderstraeten, Caroline; Wiant, David
2018-05-20
The photon optimization (PO) algorithm was recently released by Varian Medical Systems to improve volumetric modulated arc therapy (VMAT) optimization within Eclipse (Version 13.5). The purpose of this study is to compare the PO algorithm with its predecessor, the progressive resolution optimizer (PRO), for lung SBRT and brain SRS treatments. A total of 30 patients were selected retrospectively. Previously, all the plans had been generated with the PRO algorithm within Eclipse Version 13.6. In the new version of the PO algorithm (Version 15), dynamic conformal arcs (DCA) were first conformed to the target, and VMAT inverse planning was then performed to achieve the desired dose distributions. PTV coverages were forced to be identical for the same patient for a fair comparison. SBRT plan quality was assessed based on selected dose-volume parameters, including the conformity index, V20 for the lung, V30Gy for the chest wall, and D0.035cc for other critical organs. SRS plan quality was evaluated based on the conformity index and the normal tissue volumes encompassed by the 12 and 6 Gy isodose lines (V12 and V6). The modulation complexity score (MCS) was used to compare the plan complexity of the two algorithms. No statistically significant differences between the PRO and PO algorithms were found for any of the dosimetric parameters studied, which indicates that both algorithms produce comparable plan quality. Significant improvements in the gamma passing rate (increased from 97.0% to 99.2% for SBRT and 96.1% to 98.4% for SRS), MCS (average increase of 0.15 for SBRT and 0.10 for SRS), and delivery efficiency (MU reduction of 29.8% for SBRT and 28.3% for SRS) were found for the PO algorithm. MCS showed a strong correlation with the gamma passing rate, and an inverse correlation with total MUs used. The PO algorithm offers comparable plan quality to the PRO, while minimizing MLC complexity, thereby improving delivery efficiency and accuracy. © 2018 The Authors.
Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
TH-EF-BRB-04: 4π Dynamic Conformal Arc Therapy (DCAT) for SBRT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chiu, T; Long, T; Tian, Z.
2016-06-15
Purpose: To develop an efficient and effective trajectory optimization methodology for 4π dynamic conformal arc treatment (4π DCAT) with synchronized gantry and couch motion; and to investigate potential clinical benefits for stereotactic body radiation therapy (SBRT) to breast, lung, liver and spine tumors. Methods: The entire optimization framework for 4π DCAT inverse planning consists of two parts: 1) an integer programming algorithm and 2) a particle swarm optimization (PSO) algorithm. The integer programming is designed to find an optimal solution for the arc delivery trajectory with both couch and gantry rotation, while PSO minimizes a non-convex objective function based on the selected trajectory and dose-volume constraints. In this study, control point interaction is explicitly taken into account. The beam trajectory was modeled as a series of control points connected together to form a deliverable path. With linear treatment planning objectives, a mixed-integer program (MIP) was formulated. Under mild assumptions, the MIP is tractable. Assigning monitor units to control points along the path can be integrated into the model and is done by PSO. The developed 4π DCAT inverse planning strategy was evaluated on SBRT cases and compared to clinically treated plans. Results: The resulting dose distributions were compared between a 3D conformal treatment plan generated by the Pinnacle treatment planning system and 4π DCAT for a lung SBRT patient case. Both plans required a similar number of MUs: 3038 for the 3D conformal plan and 2822 for 4π DCAT. The mean doses for most OARs were greatly reduced: by 32% (cord), 70% (esophagus), 2.8% (lung) and 42.4% (stomach). Conclusion: Initial results in this study show that the proposed 4π DCAT treatment technique can achieve better OAR sparing and lower MUs, which indicates that the developed technique is promising for high-dose SBRT to reduce the risk of secondary cancer.
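The PSO stage named in the Methods can be illustrated with a generic particle swarm minimizer. This is not the authors' implementation: the inertia and acceleration coefficients are typical textbook values, and the quadratic test objective merely stands in for the non-convex dose-volume objective.

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=200, seed=0):
    """Minimal particle swarm optimizer for a non-convex objective
    f: R^dim -> R, with standard inertia/cognitive/social updates.
    Bounds and deliverability constraints are omitted in this sketch."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, (n_particles, dim))  # e.g. candidate MU weights
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()            # global best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, float(pbest_f.min())

best, val = pso_minimize(lambda z: float(np.sum((z - 0.3) ** 2)), dim=4)
```

PSO needs no gradients, which is why it pairs naturally with non-convex, dose-volume-type objectives that are awkward for derivative-based solvers.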
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanchez-Parcerisa, D; Carabe-Fernandez, A
2014-06-01
Purpose. Intensity-modulated proton therapy is usually implemented with multi-field optimization of pencil-beam scanning (PBS) proton fields. However, in view of the experience with photon IMRT, proton facilities equipped with double-scattering (DS) delivery and multi-leaf collimation (MLC) could produce highly conformal dose distributions (and possibly eliminate the need for patient-specific compensators) with a clever use of their MLC field shaping, provided that an optimal inverse TPS is developed. Methods. A prototype TPS was developed in MATLAB. The dose calculation process was based on a fluence-dose algorithm on an adaptive divergent grid. A database of dose kernels was precalculated in order to allow for fast variations of the field range and modulation during optimization. The inverse planning process was based on the adaptive simulated annealing approach, with direct aperture optimization of the MLC leaves. A dosimetry study was performed on a phantom formed by three concentric semicylinders separated by 5 mm, of which the inner-most and outer-most were regarded as organs at risk (OARs), and the middle one as the PTV. We chose a concave target (which is not treatable with conventional DS fields) to show the potential of our technique. The optimizer was configured to minimize the mean dose to the OARs while keeping good coverage of the target. Results. The plan produced by the prototype TPS achieved a conformity index of 1.34, with the mean doses to the OARs below 78% of the prescribed dose. This result is hardly achievable with the traditional conformal DS technique with compensators, and it compares to what can be obtained with PBS. Conclusion. It is certainly feasible to produce IMPT fields with MLC passive scattering fields. With a fully developed treatment planning system, the produced plans can be superior to traditional DS plans in terms of plan conformity and dose to organs at risk.
Yao, Rui; Templeton, Alistair K; Liao, Yixiang; Turian, Julius V; Kiel, Krystyna D; Chu, James C H
2014-01-01
To validate an in-house optimization program that uses adaptive simulated annealing (ASA) and gradient descent (GD) algorithms and investigate features of physical dose and generalized equivalent uniform dose (gEUD)-based objective functions in high-dose-rate (HDR) brachytherapy for cervical cancer. Eight Syed/Neblett template-based cervical cancer HDR interstitial brachytherapy cases were used for this study. Brachytherapy treatment plans were first generated using inverse planning simulated annealing (IPSA). Using the same dwell positions designated in IPSA, plans were then optimized with both physical dose and gEUD-based objective functions, using both ASA and GD algorithms. Comparisons were made between plans both qualitatively and based on dose-volume parameters, evaluating each optimization method and objective function. A hybrid objective function was also designed and implemented in the in-house program. The ASA plans are higher on bladder V75% and D2cc (p=0.034) and lower on rectum V75% and D2cc (p=0.034) than the IPSA plans. The ASA and GD plans are not significantly different. The gEUD-based plans have higher homogeneity index (p=0.034), lower overdose index (p=0.005), and lower rectum gEUD and normal tissue complication probability (p=0.005) than the physical dose-based plans. The hybrid function can produce a plan with dosimetric parameters between the physical dose-based and gEUD-based plans. The optimized plans with the same objective value and dose-volume histogram could have different dose distributions. Our optimization program based on ASA and GD algorithms is flexible on objective functions, optimization parameters, and can generate optimized plans comparable with IPSA. Copyright © 2014 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.
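The gEUD used in the objective functions above has a standard closed form, gEUD = (mean(d_i^a))^(1/a). A minimal sketch, with the dose array and the parameter a chosen only for illustration:

```python
import numpy as np

def geud(dose, a):
    """Generalized equivalent uniform dose of a voxel dose array:
    gEUD = (mean(d_i ** a)) ** (1/a). a = 1 gives the mean dose; large
    positive a approaches the maximum dose (serial-organ behaviour)."""
    d = np.asarray(dose, dtype=float)
    return float(np.mean(d ** a) ** (1.0 / a))

d = np.array([1.0, 2.0, 4.0])
mean_like = geud(d, 1)   # equals the mean dose, 7/3
max_like = geud(d, 8)    # pulled towards max(d) = 4
```

The single parameter a is what lets one objective function interpolate between mean-dose and max-dose behaviour for different organ types.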
NASA Astrophysics Data System (ADS)
Penfold, Scott; Zalas, Rafał; Casiraghi, Margherita; Brooke, Mark; Censor, Yair; Schulte, Reinhard
2017-05-01
A split feasibility formulation for the inverse problem of intensity-modulated radiation therapy treatment planning with dose-volume constraints included in the planning algorithm is presented. It involves a new type of sparsity constraint that enables the inclusion of a percentage-violation constraint in the model problem and its handling by continuous (as opposed to integer) methods. We propose an iterative algorithmic framework for solving such a problem by applying the feasibility-seeking CQ-algorithm of Byrne combined with the automatic relaxation method that uses cyclic projections. Detailed implementation instructions are furnished. Functionality of the algorithm was demonstrated through the creation of an intensity-modulated proton therapy plan for a simple 2D C-shaped geometry and also for a realistic base-of-skull chordoma treatment site. Monte Carlo simulations of proton pencil beams of varying energy were conducted to obtain dose distributions for the 2D test case. A research release of the Pinnacle 3 proton treatment planning system was used to extract pencil beam doses for a clinical base-of-skull chordoma case. In both cases the beamlet doses were calculated to satisfy dose-volume constraints according to our new algorithm. Examination of the dose-volume histograms following inverse planning with our algorithm demonstrated that it performed as intended. The application of our proposed algorithm to dose-volume constraint inverse planning was successfully demonstrated. Comparison with optimized dose distributions from the research release of the Pinnacle 3 treatment planning system showed the algorithm could achieve equivalent or superior results.
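Byrne's CQ iteration referenced above can be sketched for a toy split feasibility problem, with C the nonnegative beamlet weights and Q a box of per-voxel dose bounds. The influence matrix, bounds, and iteration count are invented; the real algorithm additionally combines this with the automatic relaxation method and sparsity constraints.

```python
import numpy as np

def cq_algorithm(A, proj_C, proj_Q, x0, iters=2000):
    """Byrne's CQ iteration for the split feasibility problem:
    find x in C such that A @ x lies in Q, given projectors onto C and Q."""
    gamma = 1.0 / np.linalg.norm(A, 2) ** 2   # step size in (0, 2/||A||^2)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y = A @ x
        x = proj_C(x + gamma * (A.T @ (proj_Q(y) - y)))
    return x

# Toy case: C = nonnegative beamlet weights, Q = box of per-voxel dose bounds.
A = np.array([[1.0, 0.5], [0.2, 1.0], [0.7, 0.7]])   # dose-influence matrix
lo, hi = np.array([0.5, 0.4, 0.5]), np.array([1.5, 1.2, 1.4])
x = cq_algorithm(A,
                 proj_C=lambda z: np.maximum(z, 0.0),
                 proj_Q=lambda y: np.clip(y, lo, hi),
                 x0=np.zeros(2))
```

Each iteration only needs the two projections and matrix-vector products with A and A.T, which is what makes feasibility-seeking methods attractive at clinical problem sizes.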
Methodologies in the modeling of combined chemo-radiation treatments
NASA Astrophysics Data System (ADS)
Grassberger, C.; Paganetti, H.
2016-11-01
The variety of treatment options for cancer patients has increased significantly in recent years. Not only do we combine radiation with surgery and chemotherapy, but new therapeutic approaches such as immunotherapy and targeted therapies are also starting to play a bigger role. Physics has made significant contributions to radiation therapy treatment planning and delivery. In particular, treatment plan optimization using inverse planning techniques has improved dose conformity considerably. Furthermore, medical physics is often the driving force behind tumor control and normal tissue complication modeling. While treatment optimization and outcome modeling focus mainly on the effects of radiation, treatment modalities such as chemotherapy are treated independently or even neglected entirely. This review summarizes the published efforts to model combined modality treatments combining radiation and chemotherapy. These models will play an increasing role in optimizing cancer therapy, not only from a radiation and drug dosage standpoint, but also in terms of spatial and temporal optimization of treatment schedules.
NASA Astrophysics Data System (ADS)
Xiao, Ying; Michalski, Darek; Censor, Yair; Galvin, James M.
2004-07-01
The efficient delivery of intensity modulated radiation therapy (IMRT) depends on finding optimized beam intensity patterns that produce dose distributions, which meet given constraints for the tumour as well as any critical organs to be spared. Many optimization algorithms that are used for beamlet-based inverse planning are susceptible to large variations of neighbouring intensities. Accurately delivering an intensity pattern with a large number of extrema can prove impossible given the mechanical limitations of standard multileaf collimator (MLC) delivery systems. In this study, we apply Cimmino's simultaneous projection algorithm to the beamlet-based inverse planning problem, modelled mathematically as a system of linear inequalities. We show that using this method allows us to arrive at a smoother intensity pattern. Including nonlinear terms in the simultaneous projection algorithm to deal with dose-volume histogram (DVH) constraints does not compromise this property from our experimental observation. The smoothness properties are compared with those from other optimization algorithms which include simulated annealing and the gradient descent method. The simultaneous property of these algorithms is ideally suited to parallel computing technologies.
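Cimmino's simultaneous projection, applied above to beamlet intensities, averages the projections of the current iterate onto all violated half-spaces at once; a small sketch on an invented system of linear inequalities:

```python
import numpy as np

def cimmino(A, b, weights=None, relax=1.0, iters=200):
    """Cimmino's simultaneous projection for the system A @ x <= b.

    Each step moves x by a weighted average of its projections onto
    every violated half-space. Because all projections are computed
    independently, the method parallelizes naturally.
    """
    A = np.asarray(A, float)
    b = np.asarray(b, float)
    m, n = A.shape
    w = np.full(m, 1.0 / m) if weights is None else np.asarray(weights, float)
    x = np.zeros(n)
    row_norms_sq = np.sum(A * A, axis=1)
    for _ in range(iters):
        violation = np.maximum(A @ x - b, 0.0)      # zero where satisfied
        step = (w * violation / row_norms_sq) @ A   # averaged projection move
        x = x - relax * step
    return x

# Half-space system encoding simple bounds on two "beamlet" weights:
# x0 + x1 <= 10,  x0 >= 2 (as -x0 <= -2),  x1 >= 3 (as -x1 <= -3).
A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([10.0, -2.0, -3.0])
x = cimmino(A, b)
```

The averaging is also what produces the smoothing effect discussed in the abstract: no single constraint can pull an individual beamlet far from its neighbors in one step.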
Stochastic Evolutionary Algorithms for Planning Robot Paths
NASA Technical Reports Server (NTRS)
Fink, Wolfgang; Aghazarian, Hrand; Huntsberger, Terrance; Terrile, Richard
2006-01-01
A computer program implements stochastic evolutionary algorithms for planning and optimizing collision-free paths for robots and their jointed limbs. Stochastic evolutionary algorithms can be made to produce acceptably close approximations to exact, optimal solutions for path-planning problems while often demanding much less computation than do exhaustive-search and deterministic inverse-kinematics algorithms that have been used previously for this purpose. Hence, the present software is better suited for application aboard robots having limited computing capabilities (see figure). The stochastic aspect lies in the use of simulated annealing to (1) prevent trapping of an optimization algorithm in local minima of an energy-like error measure by which the fitness of a trial solution is evaluated while (2) ensuring that the entire multidimensional configuration and parameter space of the path-planning problem is sampled efficiently with respect to both robot joint angles and computation time. Simulated annealing is an established technique for avoiding local minima in multidimensional optimization problems, but has not, until now, been applied to planning collision-free robot paths by use of low-power computers.
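The accept-uphill-with-probability exp(−ΔE/T) rule is the mechanism the report credits for escaping local minima; a generic 1D sketch (the cost landscape and cooling schedule are illustrative, not the flight software):

```python
import math
import random

def simulated_annealing(energy, x0, step, t0=1.0, cooling=0.995,
                        iters=2000, seed=0):
    """Generic simulated annealing on a scalar variable.

    Downhill moves are always accepted; uphill moves are accepted with
    probability exp(-dE/T), so the search can climb out of local minima
    early on, while the geometrically decaying temperature T makes the
    late phase behave like greedy descent.
    """
    rng = random.Random(seed)
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    t = t0
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)
        ce = energy(cand)
        if ce < e or rng.random() < math.exp(-(ce - e) / t):
            x, e = cand, ce
            if e < best_e:
                best_x, best_e = x, e
        t *= cooling
    return best_x, best_e

# A 1D "path cost" with a local minimum near x = -1.5 and the global
# minimum at x = 2 (purely illustrative of the escape behaviour).
def cost(x):
    return (x - 2.0) ** 2 * (x + 1.5) ** 2 + 0.5 * (x - 2.0) ** 2

x_best, e_best = simulated_annealing(cost, x0=-1.5, step=0.5, seed=1)
```

For multi-joint path planning the scalar `x` becomes a vector of joint angles and `energy` a collision/length penalty, but the acceptance rule is unchanged.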
DOE Office of Scientific and Technical Information (OSTI.GOV)
Syh, J; Syh, J; Patel, B
2014-06-15
Purpose: The multichannel cylindrical vaginal applicator is a variation of the traditional single channel cylindrical vaginal applicator. The multichannel applicator has additional peripheral channels that provide more flexibility in the planning process. The dosimetric advantage is reduced dose to adjacent organs at risk (OARs), such as the bladder and rectum, while maintaining target coverage through dose optimization over the additional channels. Methods: Vaginal HDR brachytherapy plans are all CT based. CT images were acquired at 2 mm slice thickness to preserve the integrity of cylinder contouring. The CTV, a 5 mm rind over the prescribed treatment length, was reconstructed from a 5 mm expansion of the inserted cylinder. The goal was 95% of the CTV covered by 95% of the prescribed dose in both single channel planning (SCP) and multichannel planning (MCP) before proceeding with any further optimization for dose reduction to critical structures, with emphasis on D2cc and V2Gy. Results: This study demonstrated that a noticeable dose reduction to OARs was apparent in multichannel plans. The D2cc of the rectum and bladder were reduced for multichannel versus single channel plans. The V2Gy of the rectum was 93.72% and 83.79% (p=0.007) for single channel and multichannel, respectively (Figure 1 and Table 1). Assuring adequate coverage of the target while reducing the dose to the OARs without any compromise is the main goal in using the multichannel vaginal applicator in HDR brachytherapy. Conclusion: Multichannel plans were optimized using the anatomy-based inverse optimization algorithm of inverse planning simulated annealing. The optimization goal of the algorithm was to improve clinical target volume dose coverage while reducing the dose to critical organs such as the bladder, rectum and bowel. The comparison between SCP and MCP, where the dwell positions were based on a geometric array only, demonstrated that MCP is superior to SCP. It was concluded that MCP is preferable and is able to provide certain features superior to SCP.
Choi, K; Suh, T; Xing, L
2012-06-01
Newly available flattening filter free (FFF) beams increase the dose rate by 3-6 times at the central axis. In reality, even a flattening filtered beam is not perfectly flat. In addition, the beam profiles across different fields may not have the same amplitude. The existing inverse planning formalism based on the total variation of the intensity (or fluence) map cannot account for these properties of beam profiles. The purpose of this work is to develop a novel dose optimization scheme that incorporates the inherent beam profiles to maximally utilize the efficacy of arbitrary beam profiles while preserving the convexity of the optimization problem. To increase the accuracy of the problem formalism, we decompose the fluence map as an elementwise multiplication of the inherent beam profile and a normalized transmission map (NTM). Instead of attempting to optimize the fluence maps directly, we optimize the NTMs and beam profiles separately. A least-squares problem constrained by the total variation of the NTMs is developed to derive the optimal fluence maps, balancing dose conformality and FFF beam delivery efficiency. With the resultant NTMs, we find beam profiles to renormalize the NTMs. The proposed method iteratively optimizes and renormalizes the NTMs in a closed-loop manner. The advantage of the proposed method is demonstrated using a head-neck case with flat beam profiles and a prostate case with non-flat beam profiles. The obtained NTMs achieve a more conformal dose distribution while preserving piecewise constancy compared to the existing solution. The proposed formalism has two major advantages over conventional inverse planning schemes: (1) it provides a unified framework for inverse planning with beams of arbitrary fluence profiles, including treatment with beams of mixed fluence profiles; (2) the use of total-variation constraints on the NTMs allows us to optimally balance dose conformality and deliverability for a given beam configuration.
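The elementwise decomposition fluence = profile ⊙ NTM can be made concrete with a toy FFF-like profile; a sketch (profile shape and numbers invented) showing why a total-variation penalty on the NTM, rather than on the fluence itself, does not punish structure that the beam profile inherently carries:

```python
import numpy as np

def total_variation(m):
    """Anisotropic total variation of a 2D map: sum of absolute
    differences between horizontally and vertically adjacent elements."""
    return (np.abs(np.diff(m, axis=0)).sum()
            + np.abs(np.diff(m, axis=1)).sum())

# Illustrative FFF-like inherent profile: peaked at the central axis.
x = np.linspace(-1.0, 1.0, 9)
profile = np.outer(1.5 - x ** 2, 1.5 - x ** 2)

# A constant transmission map is trivially deliverable; elementwise
# multiplication by the profile yields the actual fluence.
ntm = np.full_like(profile, 0.8)
fluence = profile * ntm          # fluence = profile ⊙ NTM

# TV of the NTM is zero even though TV of the fluence is large:
# the variation comes entirely from the inherent profile.
tv_ntm = total_variation(ntm)
tv_fluence = total_variation(fluence)
```

Penalizing `tv_ntm` instead of `tv_fluence` is, per the abstract, what keeps the optimization convex while still rewarding MLC-deliverable (piecewise constant) transmission patterns.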
This project was supported in part by grants from the National Science Foundation (0854492), the National Cancer Institute (1R01 CA104205), and the Leading Foreign Research Institute Recruitment Program of the Korean Ministry of Education, Science and Technology (K20901000001-09E0100-00110). To the authors' best knowledge, there is no conflict of interest. © 2012 American Association of Physicists in Medicine.
NASA Astrophysics Data System (ADS)
van de Water, Steven; Albertini, Francesca; Weber, Damien C.; Heijmen, Ben J. M.; Hoogeman, Mischa S.; Lomax, Antony J.
2018-01-01
The aim of this study is to develop an anatomical robust optimization method for intensity-modulated proton therapy (IMPT) that accounts for interfraction variations in nasal cavity filling, and to compare it with conventional single-field uniform dose (SFUD) optimization and online plan adaptation. We included CT data of five patients with tumors in the sinonasal region. Using the planning CT, we generated for each patient 25 ‘synthetic’ CTs with varying nasal cavity filling. The robust optimization method available in our treatment planning system ‘Erasmus-iCycle’ was extended to also account for anatomical uncertainties by including (synthetic) CTs with varying patient anatomy as error scenarios in the inverse optimization. For each patient, we generated treatment plans using anatomical robust optimization and, for benchmarking, using SFUD optimization and online plan adaptation. Clinical target volume (CTV) and organ-at-risk (OAR) doses were assessed by recalculating the treatment plans on the synthetic CTs, evaluating dose distributions individually and accumulated over an entire fractionated 50 GyRBE treatment, assuming each synthetic CT to correspond to a 2 GyRBE fraction. Treatment plans were also evaluated using actual repeat CTs. Anatomical robust optimization resulted in adequate CTV doses (V95% ⩾ 98% and V107% ⩽ 2%) if at least three synthetic CTs were included in addition to the planning CT. These CTV requirements were also fulfilled for online plan adaptation, but not for the SFUD approach, even when applying a margin of 5 mm. Compared with anatomical robust optimization, OAR dose parameters for the accumulated dose distributions were on average 5.9 GyRBE (20%) higher when using SFUD optimization and on average 3.6 GyRBE (18%) lower for online plan adaptation. 
In conclusion, anatomical robust optimization effectively accounted for changes in nasal cavity filling during IMPT, providing substantially improved CTV and OAR doses compared with conventional SFUD optimization. OAR doses can be further reduced by using online plan adaptation.
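Scenario-based robust optimization of this kind can be sketched as minimizing the worst-case deviation across the (synthetic-CT) anatomy scenarios; a toy subgradient version (matrices and prescriptions invented, and far simpler than Erasmus-iCycle's wish-list optimizer):

```python
import numpy as np

def robust_plan(scenarios, target, lr=0.01, iters=500):
    """Minimize max_k ||A_k @ w - target||^2 over w >= 0.

    Each scenario k has its own dose-influence matrix A_k (e.g. one per
    synthetic CT). At every step we take the gradient of the currently
    worst scenario, which is a valid subgradient of the max, then
    project back onto nonnegative beamlet weights.
    """
    w = np.zeros(scenarios[0].shape[1])
    for _ in range(iters):
        costs = [np.sum((A @ w - target) ** 2) for A in scenarios]
        A = scenarios[int(np.argmax(costs))]
        grad = 2.0 * A.T @ (A @ w - target)
        w = np.maximum(w - lr * grad, 0.0)
    return w

# Two toy anatomy scenarios (e.g. different cavity fillings) for a
# two-beamlet, two-voxel geometry; numbers are illustrative only.
nominal = np.array([[1.0, 0.3], [0.3, 1.0]])
shifted = np.array([[0.9, 0.4], [0.4, 0.9]])
target = np.array([10.0, 10.0])
w = robust_plan([nominal, shifted], target)
worst = max(np.sum((A @ w - target) ** 2) for A in (nominal, shifted))
```

Adding more scenarios only adds terms to the max, which is why including a handful of synthetic CTs (three sufficed in the study) is computationally cheap relative to full online replanning.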
Linear feasibility algorithms for treatment planning in interstitial photodynamic therapy
NASA Astrophysics Data System (ADS)
Rendon, A.; Beck, J. C.; Lilge, Lothar
2008-02-01
Interstitial photodynamic therapy (IPDT) has been under intense investigation in recent years, with multiple clinical trials underway. This effort has demanded the development of optimization strategies that determine the best locations and output powers for light sources (cylindrical or point diffusers) to achieve optimal light delivery. Furthermore, we have recently introduced cylindrical diffusers with customizable emission profiles, placing additional requirements on the optimization algorithms, particularly in terms of the stability of the inverse problem. Here, we present a general class of linear feasibility algorithms and their properties. Moreover, we compare two particular instances of these algorithms, which have been used in the context of IPDT: the Cimmino algorithm and a weighted gradient descent (WGD) algorithm. The algorithms were compared in terms of their convergence properties, the cost function they minimize in the infeasible case, their ability to regularize the inverse problem, and the resulting optimal light dose distributions. Our results show that the WGD algorithm overall performs slightly better than the Cimmino algorithm and that it converges to a minimizer of a clinically relevant cost function in the infeasible case. Interestingly, however, treatment plans resulting from either algorithm were very similar in terms of the resulting fluence maps and dose volume histograms, once the diffuser powers were adjusted to achieve equal prostate coverage.
3D conformal planning using low segment multi-criteria IMRT optimization
Khan, Fazal; Craft, David
2014-01-01
Purpose: To evaluate automated multicriteria optimization (MCO), designed for intensity modulated radiation therapy (IMRT) but invoked with limited segmentation, to efficiently produce high quality 3D conformal radiation therapy (3D-CRT) plans. Methods: Ten patients previously planned with 3D-CRT to various disease sites (brain, breast, lung, abdomen, pelvis) were replanned with a low-segment inverse multicriteria optimized technique. The MCO-3D plans used the same beam geometry as the original 3D plans, but were limited to an energy of 6 MV. The MCO-3D plans were optimized using fluence-based MCO IMRT and then, after MCO navigation, segmented with a low number of segments. The 3D and MCO-3D plans were compared by evaluating mean dose for all structures; D95 (dose that 95% of the structure receives) and homogeneity indexes for targets; D1 and clinically appropriate dose volume objectives for individual organs at risk (OARs); monitor units (MUs); and physician preference. Results: The MCO-3D plans reduced OAR mean doses (41 out of a total of 45 OARs had a mean dose reduction, p<<0.01) and monitor units (seven out of ten plans had reduced MUs; the average reduction was 17%, p=0.08) while maintaining clinical standards on coverage and homogeneity of target volumes. All MCO-3D plans were preferred by physicians over their corresponding 3D plans. Conclusion: High quality 3D plans can be produced using MCO-IMRT optimization, resulting in automated field-in-field type plans with good monitor unit efficiency. Adopting this technology in a clinic could improve plan quality and streamline treatment plan production by utilizing a single system applicable to both IMRT and 3D planning. PMID:25413405
NASA Astrophysics Data System (ADS)
Seeley, Kaelyn; Cunha, J. Adam; Hong, Tae Min
2017-01-01
We discuss an improvement in brachytherapy, a prostate cancer treatment method that directly places radioactive seeds inside target cancerous regions, by optimizing the current standard for delivering dose. Currently, the seeds' spatiotemporal placement is determined by optimizing the dose based on a set of physical, user-defined constraints. One particular approach is the family of "inverse planning" algorithms, which allow for tightly fit isodose lines around the target volumes in order to reduce dose to the patient's organs at risk. However, these dose distributions are typically computed assuming the same biological response to radiation for different types of tissue. In our work, we consider radiobiological parameters to account for the differences in the individual sensitivities and responses to radiation of the tissues surrounding the target. Among the benefits are a more accurate toxicity rate and more coverage to target regions when planning high-dose-rate treatments as well as permanent implants.
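The abstract does not state which radiobiological model is used, but the standard starting point for tissue-specific response is the linear-quadratic biologically effective dose; a minimal sketch (the α/β values shown are textbook-typical assumptions, not values from this work):

```python
def bed(n_fractions, dose_per_fraction, alpha_beta):
    """Linear-quadratic biologically effective dose:
    BED = n * d * (1 + d / (alpha/beta)).

    Tissues with a low alpha/beta ratio (late-responding normal
    tissue) are more sensitive to large doses per fraction.
    """
    return n_fractions * dose_per_fraction * (
        1.0 + dose_per_fraction / alpha_beta)

# The same 40 Gy physical dose is biologically "hotter" for
# late-responding tissue (alpha/beta ~ 3 Gy) than for a typical
# tumor (alpha/beta ~ 10 Gy).
bed_tumor = bed(20, 2.0, 10.0)
bed_late = bed(20, 2.0, 3.0)
```

Replacing physical-dose penalties with BED-weighted ones is one simple way an optimizer can "see" the differing tissue sensitivities the abstract describes.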
Blanck, Oliver; Wang, Lei; Baus, Wolfgang; Grimm, Jimm; Lacornerie, Thomas; Nilsson, Joakim; Luchkovskyi, Sergii; Cano, Isabel Palazon; Shou, Zhenyu; Ayadi, Myriam; Treuer, Harald; Viard, Romain; Siebert, Frank-Andre; Chan, Mark K H; Hildebrandt, Guido; Dunst, Jürgen; Imhoff, Detlef; Wurster, Stefan; Wolff, Robert; Romanelli, Pantaleo; Lartigau, Eric; Semrau, Robert; Soltys, Scott G; Schweikard, Achim
2016-05-08
Stereotactic radiosurgery (SRS) is the accurate, conformal delivery of high-dose radiation to well-defined targets while minimizing normal structure doses via steep dose gradients. While inverse treatment planning (ITP) with computerized optimization algorithms is routine, many aspects of the planning process remain user-dependent. We performed an international, multi-institutional benchmark trial to study planning variability and to analyze preferable ITP practice for spinal robotic radiosurgery. 10 SRS treatment plans were generated for a complex-shaped spinal metastasis with 21 Gy in 3 fractions and tight constraints for spinal cord (V14Gy < 2 cc, V18Gy < 0.1 cc) and target (coverage > 95%). The resulting plans were rated on a scale from 1 to 4 (excellent-poor) in five categories (constraint compliance, optimization goals, low-dose regions, ITP complexity, and clinical acceptability) by a blinded review panel. Additionally, the plans were mathematically rated based on plan indices (critical structure and target doses, conformity, monitor units, normal tissue complication probability, and treatment time) and compared to the human rankings. The treatment plans and the reviewers' rankings varied substantially among the participating centers. The average mean overall rank was 2.4 (1.2-4.0) and 8/10 plans were rated excellent in at least one category by at least one reviewer. The mathematical rankings agreed with the mean overall human rankings in 9/10 cases, pointing toward the possibility of sole mathematical plan quality comparison. The final rankings revealed that a plan with a well-balanced trade-off among all planning objectives was preferred for treatment by most participants, reviewers, and the mathematical ranking system. Furthermore, this plan was generated with simple planning techniques. Our multi-institutional planning study found wide variability in ITP approaches for spinal robotic radiosurgery.
The participants', reviewers', and mathematical match on preferable treatment plans and ITP techniques indicate that agreement on treatment planning and plan quality can be reached for spinal robotic radiosurgery.
Vanetti, Eugenio; Nicolini, Giorgia; Nord, Janne; Peltola, Jarkko; Clivio, Alessandro; Fogliata, Antonella; Cozzi, Luca
2011-11-01
The RapidArc volumetric modulated arc therapy (VMAT) planning process is based on a core engine, the so-called progressive resolution optimizer (PRO). This is the optimization algorithm used to determine the combination of field shapes and segment weights (with dose rate and gantry speed variations) that best approximates the desired dose distribution in the inverse planning problem. A study was performed to assess the behavior of two versions of PRO. These two versions differ mostly in the way the continuous variables describing the modulated arc are sampled into discrete control points, in planning efficiency, and in the presence of some new features. The analysis aimed to assess (i) plan quality, (ii) technical delivery aspects, (iii) agreement between delivery and calculation, and (iv) planning efficiency of the two versions. RapidArc plans were generated for four groups of patients (five patients each): anal canal, advanced lung, head and neck, and multiple brain metastases, and were designed to test different levels of planning complexity and anatomical features. Plans from optimization with PRO2 (the first generation of the RapidArc optimizer) were compared against PRO3 (the second generation of the algorithm). Additional plans were optimized with PRO3 using new features: the jaw tracking, intermediate dose and air cavity correction options. Results showed that (i) plan quality was generally improved with PRO3 and, although not for all parameters, some of the scored indices showed a macroscopic improvement with PRO3; (ii) PRO3 optimization leads to simpler patterns of the dynamic parameters, particularly for dose rate; (iii) no differences were observed between the two algorithms in terms of pretreatment quality assurance measurements; and (iv) PRO3 optimization was generally faster, with a time reduction by a factor of approximately 3.5 with respect to PRO2.
These results indicate that PRO3 is either clinically beneficial or neutral in terms of dosimetric quality while it showed significant advantages in speed and technical aspects.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bejarano Buele, A; Parsai, E
Purpose: The target volume for Whole Breast Irradiation (WBI) is dictated by the location of the tumor mass, breast tissue distribution, and involvement of lymph nodes. Dose coverage and Organ at Risk (OAR) sparing can be difficult to achieve in patients with unfavorable thoracic geometries. For these cases, inverse-planned and 3D-conformal prone treatments can be alternatives to traditional supine 3D-conformal plans. A dosimetric comparison can determine which of these techniques achieves optimal target coverage while sparing OARs. Methods: This study included simulation datasets for 8 patients, 5 of whom were simulated in both supine and prone positions. Positioning devices included breast boards and Vac-Lok bags for the supine position, and prone breast boards for the prone position. WBI 3D-conformal plans were created for patients simulated in both positions. Additional VMAT and IMRT WBI plans were made for all patients in the supine position. Results: Prone and supine 3D-conformal plans had comparable PTV coverage. Prone 3D-conformal plans showed a significant 50% decrease in V20, V10, V5 and V2 for the ipsilateral lung in contrast to the supine plans. The heart also experienced a 10% decrease in maximum dose in the prone position, and V20, V10, V5 and V2 had significantly lower values than in the supine plan. Supine IMRT and VMAT breast plans obtained comparable PTV coverage. The heart experienced a 10% decrease in maximum dose with inverse modulated plans when compared to the supine 3D-conformal plan, while V20, V10, V5 and V2 showed higher values with inverse modulated plans than with supine 3D-conformal plans. Conclusion: Prone 3D-conformal and supine inverse-planned treatments were generally superior in sparing OARs to supine 3D-conformal plans, with comparable PTV coverage. IMRT and VMAT plans offer sparing of OARs from high dose regions with an increase of irradiated volume in the low dose regions.
NASA Astrophysics Data System (ADS)
Peng, Guoyi; Cao, Shuliang; Ishizuka, Masaru; Hayama, Shinji
2002-06-01
This paper is concerned with the design optimization of axial flow hydraulic turbine runner blade geometry. In order to obtain a better design plan with good performance, a new comprehensive performance optimization procedure has been presented by combining a multi-variable, multi-objective constrained optimization model with a Q3D inverse computation and a performance prediction procedure. With careful analysis of the inverse design of the axial hydraulic turbine runner, the total hydraulic loss and the cavitation coefficient are taken as optimization objectives and a comprehensive objective function is defined using weight factors. Parameters of a newly proposed blade bound circulation distribution function and parameters describing the positions of the blade leading and trailing edges in the meridional flow passage are taken as optimization variables. The optimization procedure has been applied to the design optimization of a Kaplan runner with a specific speed of 440 kW. Numerical results show that the performance of the designed runner is successfully improved through optimization computation. The optimization model is found to be valid and to have good convergence. With the multi-objective optimization model, it is possible to control the performance of the designed runner by adjusting the values of the weight factors defining the comprehensive objective function.
NASA Astrophysics Data System (ADS)
Abate, A.; Pressello, M. C.; Benassi, M.; Strigari, L.
2009-12-01
The aim of this study was to evaluate the effectiveness and efficiency in inverse IMRT planning of one-step optimization with the step-and-shoot (SS) technique as compared to traditional two-step optimization using the sliding windows (SW) technique. The Pinnacle IMRT TPS allows both one-step and two-step approaches. The same beam setup for five head-and-neck tumor patients and dose-volume constraints were applied for all optimization methods. Two-step plans were produced converting the ideal fluence with or without a smoothing filter into the SW sequence. One-step plans, based on direct machine parameter optimization (DMPO), had the maximum number of segments per beam set at 8, 10, 12, producing a directly deliverable sequence. Moreover, the plans were generated whether a split-beam was used or not. Total monitor units (MUs), overall treatment time, cost function and dose-volume histograms (DVHs) were estimated for each plan. PTV conformality and homogeneity indexes and normal tissue complication probability (NTCP) that are the basis for improving therapeutic gain, as well as non-tumor integral dose (NTID), were evaluated. A two-sided t-test was used to compare quantitative variables. All plans showed similar target coverage. Compared to two-step SW optimization, the DMPO-SS plans resulted in lower MUs (20%), NTID (4%) as well as NTCP values. Differences of about 15-20% in the treatment delivery time were registered. DMPO generates less complex plans with identical PTV coverage, providing lower NTCP and NTID, which is expected to reduce the risk of secondary cancer. It is an effective and efficient method and, if available, it should be favored over the two-step IMRT planning.
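The conformity and homogeneity indexes scored above have several definitions in the literature; one common pair, sketched here (definitions assumed, since the abstract does not state which were used):

```python
import numpy as np

def homogeneity_index(ptv_dose):
    """D5%/D95% inside the PTV: 1.0 is perfectly homogeneous."""
    d5 = np.percentile(ptv_dose, 95)   # dose to the hottest 5% of volume
    d95 = np.percentile(ptv_dose, 5)   # dose covering 95% of the volume
    return d5 / d95

def conformity_index(dose, ptv_mask, rx):
    """RTOG-style CI: prescription-isodose volume over PTV volume."""
    return np.count_nonzero(dose >= rx) / np.count_nonzero(ptv_mask)

# Illustrative voxel arrays (not patient data).
ptv_dose = np.array([58.0, 59.0, 60.0, 60.0, 61.0, 62.0])
hi = homogeneity_index(ptv_dose)

dose = np.array([61.0] * 50 + [30.0] * 50)   # 50 voxels above the 60 Gy rx
mask = np.array([True] * 40 + [False] * 60)  # 40-voxel PTV
ci = conformity_index(dose, mask, rx=60.0)
```

A CI above 1 (as here, 50/40) flags prescription dose spilling outside the target, which is exactly the kind of metric used to compare the SS and SW plans.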
Rivest-Hénault, David; Dowson, Nicholas; Greer, Peter B; Fripp, Jurgen; Dowling, Jason A
2015-07-01
CT-MR registration is a critical component of many radiation oncology protocols. In prostate external beam radiation therapy, it allows the propagation of MR-derived contours to reference CT images at the planning stage, and it enables dose mapping during dosimetry studies. The use of carefully registered CT-MR atlases allows the estimation of patient specific electron density maps from MRI scans, enabling MRI-alone radiation therapy planning and treatment adaptation. In all cases, the precision and accuracy achieved by registration influences the quality of the entire process. Most current registration algorithms do not robustly generalize and lack inverse-consistency, increasing the risk of human error and acting as a source of bias in studies where information is propagated in a particular direction, e.g. CT to MR or vice versa. In MRI-based treatment planning where both CT and MR scans serve as spatial references, inverse-consistency is critical, if under-acknowledged. A robust, inverse-consistent, rigid/affine registration algorithm that is well suited to CT-MR alignment in prostate radiation therapy is presented. The presented method is based on a robust block-matching optimization process that utilises a half-way space definition to maintain inverse-consistency. Inverse-consistency substantially reduces the influence of the order of input images, simplifying analysis, and increasing robustness. An open source implementation is available online at http://aehrc.github.io/Mirorr/. Experimental results on a challenging 35 CT-MR pelvis dataset demonstrate that the proposed method is more accurate than other popular registration packages and is at least as accurate as the state of the art, while being more robust and having an order of magnitude higher inverse-consistency than competing approaches. The presented results demonstrate that the proposed registration algorithm is readily applicable to prostate radiation therapy planning. Copyright © 2015. 
Published by Elsevier B.V.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hernandez, M; Fontenot, J; Heins, D
2016-06-15
Purpose: To evaluate two dose optimization strategies for maintaining target volume coverage of inversely-planned post mastectomy radiotherapy (PMRT) plans during patient motion. Methods: Five patients previously treated with VMAT for PMRT at our clinic were randomly selected for this study. For each patient, two plan optimization strategies were compared. Plan 1 was optimized to a volume that included the physician's planning target volume (PTV) plus an expansion up to 0.3 cm from the bolus surface. Plan 2 was optimized to the PTV plus an expansion up to 0.3 cm from the patient surface (i.e., not extending into the bolus). VMAT plans were optimized to deliver 95% of the prescription to 95% of the PTV while sparing organs at risk based on clinical dose limits. PTV coverage was then evaluated following the simulation of patient shifts of 1.0 cm in the anterior and posterior directions using the treatment planning system. Results: Posterior patient shifts produced a difference in D95% of around 11% from the non-shifted dose distributions in both planning approaches. Coverage of the medial and lateral borders of the evaluation volume was reduced in both posteriorly shifted plans (Plan 1 and Plan 2). Anterior patient shifts affected Plan 2 more than Plan 1, with a difference in D95% from the non-shifted dose distributions of 1% for Plan 1 versus 6% for Plan 2. The least variation in PTV dose homogeneity for both shifts was obtained with Plan 1. However, all posteriorly shifted plans failed to deliver 95% of the prescription to 95% of the PTV, whereas only a few anteriorly shifted plans failed this criterion. Conclusion: The results of this study suggest that both planning volume methods are sensitive to patient motion, but that a PTV extended into a bolus volume is slightly more robust to anterior patient shifts.
Whitaker, May
2016-01-01
Purpose: Inverse planning simulated annealing (IPSA) optimized brachytherapy treatment plans are characterized by large isolated dwell times at the first or last dwell position of each catheter. The potential of catheter shifts relative to the target and organs at risk in these plans may lead to a more significant change in delivered dose to the volumes of interest relative to plans with more uniform dwell times. Material and methods: This study aims to determine whether the Nucletron Oncentra dwell time deviation constraint (DTDC) parameter can be optimized to improve the robustness of high-dose-rate (HDR) prostate brachytherapy plans to catheter displacements. A set of 10 clinically acceptable prostate plans were re-optimized with a DTDC parameter of 0 and of 0.4. For each plan, catheter displacements of 3, 7, and 14 mm were retrospectively applied and the changes in dose volume histogram (DVH) indices and conformity indices analyzed. Results: The robustness of clinically acceptable prostate plans to catheter displacements in the caudal direction was found to depend on the DTDC parameter. A DTDC value of 0 improves the robustness of planning target volume (PTV) coverage to catheter displacements, whereas a DTDC value of 0.4 improves the robustness of the plans to changes in hotspots. Conclusions: The results indicate that, if used in conjunction with a pre-treatment catheter displacement correction protocol and a tolerance of 3 mm, a DTDC value of 0.4 may produce clinically superior plans. However, the effect of the DTDC parameter on plan robustness was not observed to be as strong as initially suspected. PMID:27504129
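The DVH indices tracked in this robustness analysis reduce to two primitives, V_x and D_x; a small volume-fraction sketch (it ignores the absolute-volume D2cc variant, which additionally needs voxel volumes):

```python
import numpy as np

def v_x(dose, threshold):
    """V_x: fraction of the structure receiving at least `threshold`."""
    dose = np.asarray(dose, float)
    return np.count_nonzero(dose >= threshold) / dose.size

def d_x(dose, hottest_fraction):
    """D_x: minimum dose received by the hottest `hottest_fraction`
    of the structure (linear interpolation between voxel doses)."""
    dose = np.asarray(dose, float)
    return np.percentile(dose, 100.0 * (1.0 - hottest_fraction))

dose = np.arange(1.0, 101.0)   # illustrative voxel doses, 1..100
v50 = v_x(dose, 51.0)          # half the voxels receive >= 51
d10 = d_x(dose, 0.10)          # dose to the hottest 10% of volume
```

Recomputing these indices on dose grids resampled under each simulated catheter shift is the mechanical core of a robustness study like the one above.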
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGarry, Conor K., E-mail: conor.mcgarry@belfasttrust.hscni.net; Bokrantz, Rasmus; RaySearch Laboratories, Stockholm
2014-10-01
Efficacy of inverse planning is becoming increasingly important for advanced radiotherapy techniques. This study's aims were to validate multicriteria optimization (MCO) in RayStation (v2.4, RaySearch Laboratories, Sweden) against standard intensity-modulated radiation therapy (IMRT) optimization in Oncentra (v4.1, Nucletron BV, the Netherlands) and to characterize dose differences due to conversion of navigated MCO plans into deliverable multileaf collimator apertures. Step-and-shoot IMRT plans were created for 10 patients with localized prostate cancer using both standard optimization and MCO. Acceptable standard IMRT plans with minimal average rectal dose were chosen for comparison with deliverable MCO plans. For the MCO plans, the trade-off was managed through a user interface that permits continuous navigation between fluence-based plans. Navigated MCO plans were made deliverable at incremental steps along a trajectory between maximal target homogeneity and maximal rectal sparing. Dosimetric differences between navigated and deliverable MCO plans were also quantified. MCO plans chosen as acceptable under navigated and deliverable conditions resulted in similar rectal sparing compared with standard optimization (33.7 ± 1.8 Gy vs 35.5 ± 4.2 Gy, p = 0.117). The dose differences between navigated and deliverable MCO plans increased as higher priority was placed on rectal avoidance. If the best possible deliverable MCO plan was chosen, a significant reduction in rectal dose was observed in comparison with standard optimization (30.6 ± 1.4 Gy vs 35.5 ± 4.2 Gy, p = 0.047). These improvements were, however, to some extent at the expense of less conformal dose distributions, which resulted in significantly higher doses to the bladder for 2 of the 3 tolerance levels. In conclusion, similar IMRT plans can be created for patients with prostate cancer using MCO compared with standard optimization. Limitations exist within MCO regarding conversion of navigated plans to deliverable apertures, particularly for plans that emphasize avoidance of critical structures. Minimizing these differences would result in better-quality treatments for patients with prostate cancer treated with radiotherapy using MCO plans.
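The continuous navigation between fluence-based plans relies on dose being linear in fluence, so a navigated plan is a voxel-wise convex combination of the anchor plans. A minimal sketch (the voxel doses and anchor plans below are invented for illustration):

```python
def navigate(dose_a, dose_b, w):
    # Dose is linear in fluence, so sliding the navigation weight w between
    # two anchor plans yields the voxel-wise convex combination of their doses.
    return [(1.0 - w) * a + w * b for a, b in zip(dose_a, dose_b)]

# Anchor A: maximal target homogeneity; anchor B: maximal rectal sparing.
# First two entries are target voxels, last two are rectal voxels (Gy, toy).
plan_a = [78.0, 78.0, 40.0, 36.0]
plan_b = [74.0, 70.0, 26.0, 24.0]

trade_off = []
for w in (0.0, 0.5, 1.0):
    d = navigate(plan_a, plan_b, w)
    target_min = min(d[:2])          # target coverage proxy
    rectum_mean = sum(d[2:]) / 2.0   # rectal sparing proxy
    trade_off.append((w, target_min, rectum_mean))
```

Moving w toward the rectal-sparing anchor lowers the rectal mean dose at the cost of target coverage, which is exactly the trajectory along which the study converted navigated plans into deliverable apertures.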
Simultaneous optimization of photons and electrons for mixed beam radiotherapy
NASA Astrophysics Data System (ADS)
Mueller, S.; Fix, M. K.; Joosten, A.; Henzen, D.; Frei, D.; Volken, W.; Kueng, R.; Aebersold, D. M.; Stampanoni, M. F. M.; Manser, P.
2017-07-01
The aim of this work is to develop and investigate an inverse treatment planning process (TPP) for mixed beam radiotherapy (MBRT) capable of performing simultaneous optimization of photon and electron apertures. A simulated annealing based direct aperture optimization (DAO) is implemented to perform simultaneous optimization of photon and electron apertures, both shaped with the photon multileaf collimator (pMLC). Validated beam models are used as input for Monte Carlo dose calculations. Consideration of photon pMLC transmission during DAO and a weight re-optimization of the apertures after deliverable dose calculation are utilized to efficiently reduce the differences between optimized and deliverable dose distributions. The TPP for MBRT is evaluated for an academic situation with a superficial PTV and an enlarged PTV at depth, a left chest wall case including the internal mammary chain, and a squamous cell carcinoma case. Deliverable dose distributions of MBRT plans are compared to those of modulated electron radiotherapy (MERT), photon IMRT and, where available, to those of clinical VMAT plans. The generated MBRT plans dosimetrically outperform the MERT, photon IMRT and VMAT plans for all investigated situations. For the clinical cases of the left chest wall and the squamous cell carcinoma, the MBRT plans cover the PTV similarly or more homogeneously than the VMAT plans, while OARs are spared considerably better, with average reductions of the mean dose to parallel OARs and D2% to serial OARs of 54% and 26%, respectively. Moreover, the low-dose bath, expressed as V10% to normal tissue, is substantially reduced by up to 45% compared to the VMAT plans. A TPP for MBRT including simultaneous optimization is successfully implemented, and the dosimetric superiority of MBRT plans over MERT, photon IMRT and VMAT plans is demonstrated for academic and clinical situations including superficial targets with and without a deep-seated part.
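The simulated-annealing core of such a DAO can be sketched generically; here the aperture shapes are fixed and only two aperture weights are optimized against a two-voxel prescription (the dose matrix, prescription, and annealing schedule are invented for illustration):

```python
import math
import random

def simulated_annealing(cost, x0, step=0.1, t0=1.0, cooling=0.99,
                        iters=2000, seed=1):
    rng = random.Random(seed)
    x = list(x0)
    fx = cost(x)
    best, fbest = list(x), fx
    t = t0
    for _ in range(iters):
        cand = list(x)
        i = rng.randrange(len(cand))
        cand[i] = max(0.0, cand[i] + rng.uniform(-step, step))  # weights >= 0
        fc = cost(cand)
        # Metropolis criterion: always accept improvements, sometimes accept
        # uphill moves early on to escape local minima.
        if fc < fx or rng.random() < math.exp((fx - fc) / max(t, 1e-12)):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling
    return best, fbest

# Toy problem: two fixed apertures, two target voxels, 2 Gy prescribed each.
D = [[1.0, 0.0], [0.4, 0.8]]           # D[voxel][aperture], invented
prescription = [2.0, 2.0]

def cost(w):
    return sum((sum(Di[a] * w[a] for a in range(2)) - p) ** 2
               for Di, p in zip(D, prescription))

w, f = simulated_annealing(cost, [1.0, 1.0])   # optimum at w = (2.0, 1.5)
```

A real DAO additionally mutates leaf positions of the apertures (photon and electron alike in MBRT); the acceptance rule and cooling schedule are the same idea.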
DOE Office of Scientific and Technical Information (OSTI.GOV)
Syh, J; Syh, J; Patel, B
2015-06-15
Purpose: The multichannel cylindrical applicator is a distinctive modification of the traditional single-channel cylindrical applicator. The novel multichannel applicator has additional peripheral channels that provide more flexibility in both the treatment planning process and its outcomes. Reducing doses to adjacent organs at risk (OARs) while maintaining target coverage through inverse plan optimization is the goal of this novel brachytherapy device. Following a series of comparisons and analyses of results in more than forty patients who received HDR brachytherapy using the multichannel vaginal applicator, this procedure has been implemented in our institution. Methods: Multichannel planning was CT image based. The CTV, a 5 mm vaginal cuff rind of the prescribed length, was reconstructed, as were the bladder and rectum. The D95 of the CTV was required to be at least 95% of the prescribed dose. The multichannel inverse plan optimization algorithm not only shapes the target dose cloud but also sets dose-avoidance objectives for the OARs. The D2cc, D5cc, and D5 doses and the V2Gy volume of the OARs were selected for comparison with single-channel results, in which only the sole central channel is available. Results: The study demonstrates superior OAR dose reduction in the multichannel plans. The D2cc of the rectum and bladder were slightly lower for multichannel than for single-channel plans. The V2Gy of the rectum was 93.72% vs. 83.79% (p=0.007) for single channel vs. multichannel, respectively. The absolute mean D5 reduction with the multichannel applicator was 17 cGy (s.d.=6.4) for the bladder and 44 cGy (s.d.=15.2) for the rectum. Conclusion: The optimization solution in the multichannel plans was to maintain D95 CTV coverage while reducing the dose to the OARs. The dosimetric advantage in sparing critical organs by using a multichannel applicator in HDR brachytherapy treatment of the vaginal cuff is promising, and the technique has been implemented clinically.
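The reported indices (D2cc, V2Gy) are simple functions of the per-voxel structure dose. A minimal sketch (the voxel doses and voxel volume are invented; Dn-cc is the minimum dose received by the hottest n cm³):

```python
def d_cc(doses, voxel_cc, cc):
    # Dn_cc: minimum dose to the hottest `cc` cm^3, from per-voxel doses.
    n = max(1, round(cc / voxel_cc))
    return sorted(doses, reverse=True)[n - 1]

def v_dose(doses, threshold):
    # V_D: fraction of the structure receiving at least `threshold`.
    return sum(d >= threshold for d in doses) / len(doses)

# Toy rectal voxel doses (Gy), 0.5 cc per voxel.
rectum = [1.0, 1.5, 2.2, 2.8, 3.1, 1.2, 0.8, 2.0]
d2cc = d_cc(rectum, 0.5, 2.0)   # hottest 2 cc = hottest 4 voxels
v2gy = v_dose(rectum, 2.0)
```

In clinical systems these indices are read off an accumulated DVH rather than raw voxel lists, but the definition is the same.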
Wu, V W C; Sham, J S T; Kwong, D L W
2004-07-01
The aim of this study is to demonstrate the use of inverse planning in three-dimensional conformal radiation therapy (3DCRT) of oesophageal cancer patients and to evaluate its dosimetric results by comparing them with forward planning of 3DCRT and inverse planning of intensity-modulated radiotherapy (IMRT). For each of the 15 oesophageal cancer patients in this study, forward 3DCRT, inverse 3DCRT and inverse IMRT plans were produced using the FOCUS treatment planning system. The dosimetric results and the planner's time associated with each of the treatment plans were recorded for comparison. The inverse 3DCRT plans showed dosimetric results similar to the forward plans in the planning target volume (PTV) and organs at risk (OARs). However, they were inferior to the IMRT plans in terms of tumour control probability and target dose conformity. Furthermore, the inverse 3DCRT plans were less effective than the IMRT plans in reducing the percentage lung volume receiving a dose below 25 Gy. The inverse 3DCRT plans delivered a heart dose similar to the forward plans but higher than the IMRT plans. The inverse 3DCRT plans significantly reduced the planner's time, by 2.5-fold relative to the forward plans. In conclusion, inverse planning for 3DCRT is a reasonable alternative to forward planning for oesophageal cancer patients, with a reduction of the planner's time. However, IMRT has the greater potential to allow further dose escalation and improvement of tumour control.
Tagliabue, Michele; Pedrocchi, Alessandra; Pozzo, Thierry; Ferrigno, Giancarlo
2008-01-01
In spite of the complexity of human motor behavior, difficulties in mathematical modeling have restricted attempts to identify the motor planning criterion used by the central nervous system to rather simple movements. This paper presents a novel simulation technique able to predict the "desired trajectory" corresponding to a wide range of kinematic and kinetic optimality criteria for tasks involving many degrees of freedom and the coordination between goal achievement and balance maintenance. Proper time discretization, inverse dynamics methods and constrained optimization techniques are combined. The application of this simulator to a planar whole-body pointing movement shows its effectiveness in managing system nonlinearities and instability as well as in ensuring the anatomo-physiological feasibility of predicted motor plans. In addition, the simulator's capability to simultaneously optimize competing movement aspects represents an interesting opportunity for the motor control community, in which the coexistence of several controlled variables has been hypothesized.
Wang, Lei; Baus, Wolfgang; Grimm, Jimm; Lacornerie, Thomas; Nilsson, Joakim; Luchkovskyi, Sergii; Cano, Isabel Palazon; Shou, Zhenyu; Ayadi, Myriam; Treuer, Harald; Viard, Romain; Siebert, Frank‐Andre; Chan, Mark K.H.; Hildebrandt, Guido; Dunst, Jürgen; Imhoff, Detlef; Wurster, Stefan; Wolff, Robert; Romanelli, Pantaleo; Lartigau, Eric; Semrau, Robert; Soltys, Scott G.; Schweikard, Achim
2016-01-01
Stereotactic radiosurgery (SRS) is the accurate, conformal delivery of high-dose radiation to well-defined targets while minimizing normal structure doses via steep dose gradients. While inverse treatment planning (ITP) with computerized optimization algorithms is routine, many aspects of the planning process remain user-dependent. We performed an international, multi-institutional benchmark trial to study planning variability and to analyze preferable ITP practice for spinal robotic radiosurgery. Ten SRS treatment plans were generated for a complex-shaped spinal metastasis with 21 Gy in 3 fractions and tight constraints for spinal cord (V14Gy < 2 cc, V18Gy < 0.1 cc) and target (coverage > 95%). The resulting plans were rated on a scale from 1 to 4 (excellent to poor) in five categories (constraint compliance, optimization goals, low-dose regions, ITP complexity, and clinical acceptability) by a blinded review panel. Additionally, the plans were rated mathematically based on plan indices (critical structure and target doses, conformity, monitor units, normal tissue complication probability, and treatment time) and compared to the human rankings. The treatment plans and the reviewers' rankings varied substantially among the participating centers. The average mean overall rank was 2.4 (range 1.2-4.0) and 8/10 plans were rated excellent in at least one category by at least one reviewer. The mathematical rankings agreed with the mean overall human rankings in 9/10 cases, pointing toward the possibility of purely mathematical plan quality comparison. The final rankings revealed that a plan with a well-balanced trade-off among all planning objectives was preferred for treatment by most participants, reviewers, and the mathematical ranking system. Furthermore, this plan was generated with simple planning techniques. Our multi-institutional planning study found wide variability in ITP approaches for spinal robotic radiosurgery. The agreement among participants, reviewers, and the mathematical ranking on preferable treatment plans and ITP techniques indicates that consensus on treatment planning and plan quality can be reached for spinal robotic radiosurgery. PACS number(s): 87.55.de PMID:27167291
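One standard mathematical plan index used in such rankings is the Paddick conformity index; a sketch over voxel index sets (the voxel sets below are invented for illustration):

```python
def paddick_ci(tv, piv):
    # Paddick CI = |TV ∩ PIV|^2 / (|TV| * |PIV|), where TV is the target
    # volume and PIV the prescription isodose volume; 1.0 is perfect.
    inter = len(tv & piv)
    return inter * inter / (len(tv) * len(piv))

target = set(range(100))                    # voxels inside the target
prescription_isodose = set(range(10, 130))  # voxels covered by the Rx isodose
ci = paddick_ci(target, prescription_isodose)
```

The index penalizes both under-coverage (target voxels outside the isodose) and spill (isodose voxels outside the target) in a single number, which makes it convenient for automated plan comparison.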
Evidence for composite cost functions in arm movement planning: an inverse optimal control approach.
Berret, Bastien; Chiovetto, Enrico; Nori, Francesco; Pozzo, Thierry
2011-10-01
An important issue in motor control is understanding the basic principles underlying the accomplishment of natural movements. According to optimal control theory, the problem can be stated in these terms: what cost function do we optimize to coordinate the many more degrees of freedom than necessary to fulfill a specific motor goal? This question has not received a final answer yet, since what is optimized partly depends on the requirements of the task. Many cost functions were proposed in the past, and most of them were found to be in agreement with experimental data. Therefore, the actual principles on which the brain relies to achieve a certain motor behavior are still unclear. Existing results suggest that movements may result from the minimization not of single but of composite cost functions. To clarify this point, we consider an innovative experimental paradigm characterized by arm reaching with target redundancy. Within this framework, we make use of an inverse optimal control technique to automatically infer the (combination of) optimality criteria that best fit the experimental data. Results show that the subjects exhibited a consistent behavior during each experimental condition, even though the target point was not prescribed in advance. Inverse and direct optimal control together reveal that the average arm trajectories were best replicated when optimizing a combination of two cost functions, namely a mix between the absolute work of torques and the integrated squared joint acceleration. Our results thus support the cost combination hypothesis and demonstrate that the recorded movements were closely linked to the combination of two complementary functions related to mechanical energy expenditure and joint-level smoothness.
NASA Astrophysics Data System (ADS)
Liu, Hongcheng; Dong, Peng; Xing, Lei
2017-08-01
Traditional inverse planning relies on the use of weighting factors to balance the conflicting requirements of different structures. Manual trial-and-error determination of weighting factors has long been recognized as a time-consuming part of treatment planning. The purpose of this work is to develop an inverse planning framework that parameterizes the dosimetric tradeoff among the structures with physically meaningful quantities to simplify the search for clinically sensible plans. In this formalism, instead of using weighting factors, the permissible variation range of the prescription dose or dose volume histogram (DVH) of the involved structures is used to characterize the ‘importance’ of the structures. The inverse planning is then formulated as a convex feasibility problem, called the dosimetric variation-controlled model (DVCM), whose goal is to generate plans with dosimetric or DVH variations of the structures consistent with the pre-specified values. For simplicity, the dosimetric variation range for a structure is extracted from a library of previous cases that possess similar anatomy and prescription. A two-phase procedure (TPP) is designed to solve the model. The first phase identifies a physically feasible plan satisfying the prescribed dosimetric variation, and the second phase automatically improves the plan in case there is room for further improvement. The proposed technique is applied to plan two prostate cases and two head-and-neck cases, and the results are compared with those obtained using a conventional CVaR approach and with a moment-based optimization scheme. Our results show that the strategy is able to generate clinically sensible plans with little trial and error. In all cases, the TPP generates a very competitive plan as compared to those obtained using the alternative approaches. In particular, in the planning of one of the head-and-neck cases, the TPP leads to a non-trivial improvement in the resultant dose distribution: the fractional volumes receiving a dose above 20 Gy for the spinal cord are reduced by more than 40% compared to the alternative schemes, while maintaining the same PTV coverage. With physically more meaningful modeling of the inter-structural tradeoff, the reported technique substantially reduces the need for trial-and-error adjustment of the model parameters. The new formalism also opens new opportunities for incorporating prior knowledge to facilitate the treatment planning process.
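The convex feasibility formulation can be attacked with standard alternating projections (POCS). A two-voxel toy (the linear target-to-OAR coupling factor and all dose numbers are invented; the DVCM's actual solver is more elaborate):

```python
def project_box(d, lo, hi):
    # Projection onto the dosimetric-variation box [lo, hi], per voxel.
    return [min(max(x, l), h) for x, l, h in zip(d, lo, hi)]

def project_line(d, k):
    # Projection onto the physically achievable set {(t, k*t)}, a stand-in
    # for "dose is a linear map of beamlet weights":
    # minimize (t - d0)^2 + (k*t - d1)^2  =>  t = (d0 + k*d1) / (1 + k^2).
    t = (d[0] + k * d[1]) / (1.0 + k * k)
    return [t, k * t]

k = 0.3                  # assumed OAR/target dose coupling (scatter)
d = [50.0, 30.0]         # initial, infeasible dose estimate (Gy)
for _ in range(50):
    # Target voxel must lie in [60, 64] Gy; OAR voxel must stay <= 20 Gy.
    d = project_box(d, [60.0, 0.0], [64.0, 20.0])
    d = project_line(d, k)
```

When the constraint sets intersect, alternating projections converge to a point satisfying all of them simultaneously, which is exactly the "feasibility" phase of a two-phase scheme; the improvement phase then pushes further inside the feasible region.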
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, T; Zhou, L; Li, Y
Purpose: For intensity modulated radiotherapy, plan optimization is time-consuming, with difficulties in selecting objectives, constraints, and their relative weights. A fast and automatic multi-objective optimization algorithm with the ability to predict optimal constraints and manage their trade-offs can help to solve this problem. Our purpose is to develop such a framework and algorithm for general inverse planning. Methods: The proposed multi-objective optimization framework contains three main components: prediction of initial dosimetric constraints, further adjustment of constraints, and plan optimization. First, we use our previously developed in-house geometry-dosimetry correlation model to predict the optimal patient-specific dosimetric endpoints and treat them as initial dosimetric constraints. Second, we build an endpoint (organ) priority list and a constraint adjustment rule to repeatedly tighten these constraints from their initial values, until no single endpoint has room for further improvement. Last, we implement a voxel-independent FMO algorithm for optimization. During the optimization, a model for tuning the voxel weighting factors with respect to the constraints is created. For framework and algorithm evaluation, we randomly selected 20 IMRT prostate cases from the clinic and compared them with our automatically generated plans in both efficiency and plan quality. Results: For each evaluated case, the proposed multi-objective framework ran smoothly and automatically. The number of voxel weighting factor iterations varied from 10 to 30 per updated constraint, and the number of constraint tuning iterations varied from 20 to 30 per case until no stricter constraint was allowed. The average total time for the whole optimization procedure was ~30 min. By comparing the DVHs, better OAR dose sparing was observed in the automatically generated plans for 13 out of the 20 cases, while the others showed competitive results. Conclusion: We have successfully developed a fast and automatic multi-objective optimization framework for intensity modulated radiotherapy. This work is supported by the National Natural Science Foundation of China (No. 81571771).
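The constraint-adjustment component (tighten each endpoint in priority order until no endpoint can be improved) can be sketched generically; the feasibility oracle below is a stand-in for the actual plan optimization, and the organ names, limits, and floors are invented:

```python
def auto_tighten(solve, constraints, priority, step=1.0):
    # Repeatedly try to tighten each endpoint's limit in priority order;
    # keep a tightening only if the solver still finds a feasible plan.
    constraints = dict(constraints)
    improved = True
    while improved:
        improved = False
        for organ in priority:
            trial = dict(constraints)
            trial[organ] = trial[organ] - step
            if solve(trial):
                constraints = trial
                improved = True
    return constraints

# Hypothetical feasibility model: a plan exists iff each limit stays
# at or above an organ-specific floor (stand-in for the FMO solve).
floors = {"cord": 12.0, "parotid": 24.0}
solve = lambda c: all(c[o] >= floors[o] for o in c)
final = auto_tighten(solve, {"cord": 20.0, "parotid": 30.0},
                     ["cord", "parotid"])
```

The loop terminates exactly when every endpoint sits at the tightest value the (stand-in) solver can still satisfy, mirroring the "no room for further improvement" stopping rule in the abstract.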
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grelewicz, Z; Wiersma, R
Purpose: Real-time fluoroscopy may allow for improved patient positioning and tumor tracking, particularly in the treatment of lung tumors. To mitigate the effects of the imaging dose, previous studies have demonstrated the effect of including both imaging dose and imaging constraints in the inverse treatment planning objective function. That method of combined MV+kV optimization may result in plans with treatment beams chosen to allow for gentler imaging beam-on times. Direct aperture optimization (DAO) is also known to produce treatment plans with fluence maps more conducive to lower beam-on times. Therefore, in this work we demonstrate the feasibility of a combination of DAO and MV+kV optimization for further optimized real-time kV imaging. Methods: Therapeutic and imaging beams were modeled in the EGSnrc Monte Carlo environment and applied to a model of a previously treated lung cancer patient to provide dose influence matrices from DOSXYZnrc. An MV+kV IMRT DAO treatment planning system was developed to compare DAO treatment plans with and without MV+kV optimization. The objective function was optimized using simulated annealing. To allow for comparisons between different cases of the stochastically optimized plans, the optimization was repeated twenty times. Results: Across twenty optimizations, combined MV+kV IMRT resulted in an average 12.8% reduction in peak skin dose. Both non-optimized and MV+kV optimized imaging beams delivered, on average, a mean dose of approximately 1 cGy per fraction to the target, with peak target doses of approximately 6 cGy per fraction. Conclusion: When using DAO, MV+kV optimization is shown to improve plan quality in terms of skin dose compared with MV optimization with non-optimized kV imaging. The combination of DAO and MV+kV optimization may allow for real-time imaging without excessive imaging dose. Financial support for this work has been provided in part by NIH Grant T32 EB002103, ACS RSG-13-313-01-CCE, and NIH S10 RR021039 and P30 CA14599 grants. The contents of this submission do not necessarily represent the official views of any of the supporting organizations.
Qin, Nan; Shen, Chenyang; Tsai, Min-Yu; Pinto, Marco; Tian, Zhen; Dedes, Georgios; Pompos, Arnold; Jiang, Steve B; Parodi, Katia; Jia, Xun
2018-01-01
One of the major benefits of carbon ion therapy is enhanced biological effectiveness in the Bragg peak region. For intensity modulated carbon ion therapy (IMCT), it is desirable to use Monte Carlo (MC) methods to compute the properties of each pencil beam spot for treatment planning, because of their accuracy in modeling physics processes and estimating biological effects. We previously developed goCMC, a graphics processing unit (GPU)-oriented MC engine for carbon ion therapy. The purpose of the present study was to build a biological treatment plan optimization system using goCMC. The repair-misrepair-fixation model was implemented to compute the spatial distribution of linear-quadratic model parameters for each spot. A treatment plan optimization module was developed to minimize the difference between the prescribed and actual biological effect. We used a gradient-based algorithm to solve the optimization problem. The system was embedded in the Varian Eclipse treatment planning system under a client-server architecture to achieve a user-friendly planning environment. We tested the system with a 1-dimensional homogeneous water case and three 3-dimensional patient cases. Our system generated treatment plans with biological spread-out Bragg peaks covering the targeted regions and sparing critical structures. Using 4 NVidia GTX 1080 GPUs, the total computation time, including spot simulation, optimization, and final dose calculation, was 0.6 hour for the prostate case (8282 spots), 0.2 hour for the pancreas case (3795 spots), and 0.3 hour for the brain case (6724 spots). The computation time was dominated by MC spot simulation. We built a biological treatment plan optimization system for IMCT that performs simulations using a fast MC engine, goCMC. To the best of our knowledge, this is the first time that full MC-based IMCT inverse planning has been achieved in a clinically viable time frame. Copyright © 2017 Elsevier Inc. All rights reserved.
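The gradient-based biological optimization can be sketched by minimizing the squared difference between prescribed and actual linear-quadratic (LQ) effect with respect to spot weights; the dose-influence matrix, LQ parameters (spatially uniform here, unlike the RMF-derived per-spot values of the actual system), and prescription are invented:

```python
# Gradient descent on F(w) = sum_i (effect_i - prescribed_i)^2, where
# effect = alpha*d + beta*d^2 (LQ model) and d = A @ w (dose from spots).
A = [[1.0, 0.2], [0.2, 1.0]]     # dose per unit spot weight (invented)
alpha, beta = 0.1, 0.05          # LQ parameters, uniform (an assumption)
prescribed = [0.5, 0.5]          # prescribed biological effect per voxel
w = [1.0, 1.0]                   # initial spot weights
lr = 0.2
for _ in range(500):
    d = [sum(A[i][j] * w[j] for j in range(2)) for i in range(2)]
    e = [alpha * di + beta * di * di for di in d]
    # dF/dw_j = sum_i 2*(e_i - p_i)*(alpha + 2*beta*d_i)*A[i][j]
    g = [sum(2.0 * (e[i] - prescribed[i]) * (alpha + 2.0 * beta * d[i]) * A[i][j]
             for i in range(2)) for j in range(2)]
    w = [max(0.0, wj - lr * gj) for wj, gj in zip(w, g)]
```

Because the effect is quadratic in dose, the gradient picks up the extra (alpha + 2*beta*d) factor relative to a purely physical-dose objective; that is what makes the optimized peak a *biological* spread-out Bragg peak.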
Optimal Inversion Parameters for Full Waveform Inversion using OBS Data Set
NASA Astrophysics Data System (ADS)
Kim, S.; Chung, W.; Shin, S.; Kim, D.; Lee, D.
2017-12-01
In recent years, Full Waveform Inversion (FWI) has been among the most researched techniques in seismic data processing. It uses the residuals between observed and modeled data as an objective function; the final subsurface velocity model is then generated through a series of iterations meant to minimize the residuals. Research on FWI has expanded from acoustic media to elastic media. In acoustic media, the subsurface property is defined by P-velocity; in elastic media, however, properties are defined by multiple parameters, such as P-velocity, S-velocity, and density. The elastic media can also be defined by the Lamé constants and density, or by the impedances (PI, SI); consequently, research is being carried out to ascertain the optimal parameters. With advanced exploration equipment and Ocean Bottom Seismic (OBS) surveys, it is now possible to obtain multi-component seismic data. However, to perform FWI on these data and generate an accurate subsurface model, it is important to determine the optimal inversion parameters among (Vp, Vs, ρ), (λ, μ, ρ), and (PI, SI) in elastic media. In this study, a staggered-grid finite difference method was applied to simulate the OBS survey. For the inversion, the l2-norm was set as the objective function. Further, the accurate computation of the gradient direction was performed using the back-propagation technique, and its scaling was done using the pseudo-Hessian matrix. In acoustic media, only Vp is used as the inversion parameter. In contrast, various sets of parameters, such as (Vp, Vs, ρ) and (λ, μ, ρ), can be used to define inversion in elastic media. Therefore, it is important to ascertain the parameter set that gives the most accurate inversion result with an OBS data set. In this study, we generated Vp and Vs subsurface models by using (λ, μ, ρ) and (Vp, Vs, ρ) as inversion parameters in every iteration, and compared the two final FWI results. This research was supported by the Basic Research Project (17-3312) of the Korea Institute of Geoscience and Mineral Resources (KIGAM), funded by the Ministry of Science, ICT and Future Planning of Korea.
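The residual-driven iteration at the core of FWI can be sketched in a drastically reduced form: a single velocity parameter recovered from travel-time residuals by gradient descent on the l2 misfit (the offsets, true velocity, and step size are invented; real FWI obtains the gradient by back-propagation/adjoint modeling and scales it, e.g., with the pseudo-Hessian):

```python
# l2-misfit FWI caricature: recover one velocity from travel-time data.
offsets = [1000.0, 2000.0, 3000.0]     # source-receiver offsets (m), invented
v_true = 2500.0                        # "unknown" velocity (m/s)
observed = [x / v_true for x in offsets]

v = 2000.0                             # starting model
for _ in range(200):
    residual = [x / v - t for x, t in zip(offsets, observed)]
    # Gradient of 0.5*sum(residual^2) w.r.t. v, using d(x/v)/dv = -x/v^2
    grad = sum(r * (-x / (v * v)) for r, x in zip(residual, offsets))
    v -= 2.0e5 * grad                  # step size tuned for this toy problem
```

In the elastic multi-parameter setting the same loop runs over a vector of model parameters, and the choice of parameterization, (Vp, Vs, ρ) versus (λ, μ, ρ), changes the conditioning of exactly this gradient step, which is the comparison the study performs.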
Yang, Jie; Zhang, Pengcheng; Zhang, Liyuan; Shu, Huazhong; Li, Baosheng; Gui, Zhiguo
2017-01-01
In inverse treatment planning of intensity-modulated radiation therapy (IMRT), the objective function is typically the sum of weighted sub-scores, where the weights indicate the importance of the sub-scores. To obtain a high-quality treatment plan, the planner manually adjusts the objective weights using a trial-and-error procedure until an acceptable plan is reached. In this work, a new particle swarm optimization (PSO) method that can adjust the weighting factors automatically was investigated to remove the requirement of manual adjustment, thereby reducing the workload of the human planner and contributing to the development of a fully automated planning process. The proposed optimization method consists of three steps. (i) First, a swarm of weighting factors (i.e., particles) is initialized randomly in the search space, where each particle corresponds to a global objective function. (ii) Then, a plan optimization solver is employed to obtain the optimal solution for each particle, and the values of the evaluation functions used to determine the particle's location and the population global location for the PSO are calculated based on these results. (iii) Next, the weighting factors are updated based on the particle's location and the population global location. Step (ii) is performed alternately with step (iii) until the termination condition is reached. In this method, the evaluation function is a combination of several key points on the dose volume histograms. Furthermore, a perturbation strategy, a hybrid crossover-and-mutation operator approach, is employed to enhance the population diversity, and two arguments are applied to the evaluation function to improve the flexibility of the algorithm. In this study, the proposed method was used to develop IMRT treatment plans involving five unequally spaced 6 MV photon beams for 10 prostate cancer cases. The proposed optimization algorithm yielded high-quality plans for all of the cases without human planner intervention. A comparison with the optimized solutions obtained using a similar optimization model but with human planner intervention revealed that the proposed algorithm produced plans superior to those developed manually. The proposed algorithm can generate admissible solutions within reasonable computational times and can be used to develop fully automated IMRT treatment planning methods, thus reducing human planners' workloads during iterative processes. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
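Steps (i)-(iii) above amount to a standard PSO loop over the weighting factors; in this sketch, the inner "plan optimization solver plus DVH-based evaluation" is replaced by a smooth surrogate score with a known optimum (a pure stand-in for illustration):

```python
import random

def pso(score, dim, n=20, iters=100, seed=7):
    # Particles are candidate weighting-factor vectors; each is scored, and
    # positions are pulled toward personal and global bests (steps ii-iii).
    rng = random.Random(seed)
    x = [[rng.random() for _ in range(dim)] for _ in range(n)]
    v = [[0.0] * dim for _ in range(n)]
    pbest = [list(p) for p in x]
    pcost = [score(p) for p in x]
    gbest = list(pbest[min(range(n), key=lambda i: pcost[i])])
    gcost = min(pcost)
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                v[i][d] = (0.7 * v[i][d]
                           + 1.5 * r1 * (pbest[i][d] - x[i][d])
                           + 1.5 * r2 * (gbest[d] - x[i][d]))
                x[i][d] += v[i][d]
            c = score(x[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = list(x[i]), c
                if c < gcost:
                    gbest, gcost = list(x[i]), c
    return gbest, gcost

# Surrogate for "solve the plan with these weights, then score its DVH":
surrogate = lambda w: (w[0] - 0.3) ** 2 + (w[1] - 0.7) ** 2
best, cost = pso(surrogate, 2)
```

In the actual method each score evaluation is itself a full plan optimization, so the swarm size and iteration count trade plan quality against total planning time.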
DOE Office of Scientific and Technical Information (OSTI.GOV)
Riofrio, D; Luan, S; Zhou, J
Purpose: In prostate HDR brachytherapy, interstitial implants are placed manually on the fly. The aim for this research is to develop a computer algorithm to find optimal and reliable implant trajectories using minimal number of implants. Methods: Our new algorithm mainly uses these key ideas: (1) positive charged static particles are uniformly placed on the surface of prostate and critical structures such as urethra, bladder, and rectum. (2) Positive charged kinetic particles are placed at a cross-section of the prostate with an initial velocity parallel to the principal implant direction. (3) The kinetic particles move through the prostate, interacting withmore » each other, spreading out, while staying away from the prostate surface and critical structures. The initial velocity ensures that the trajectories observe the curvature constraints of typical implant procedures. (4) The finial trajectories of kinetic particles are smoothed using a third-degree polynomial regression, which become the implant trajectories. (5) The dwelling times and final dose distribution are calculated using least-distance programming. Results: (1) We experimented with previously treated cases. Our plan achieves all prescription goals while reducing the number of implants by 41%! Our plan also has less uniform target dose, which implies a higher dose is delivered to the prostate. (2) We expect future implant procedures will be performed under the guidance of such pre-calculated trajectories. To assess the applicability, we randomly perturb the tracks to mimic the manual implant errors. Our studies showed the impact of these perturbations are negligible, which is compensated by the least distance programming. Conclusions: We developed a new inverse planning system for prostate HDR therapy that can find optimal implant trajectories while minimizing the number of implants. For future work, we plan to integrate our new inverse planning system with an existing needle tracking system.« less
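The particle dynamics in steps (2)-(3) can be caricatured in one lateral dimension: kinetic particles repel each other with a Coulomb-like force and spread out while their (implicit) forward motion along the implant direction traces the trajectories. All constants are invented, and the repulsive charges on the prostate surface and critical structures are omitted:

```python
# Mutually repelling kinetic particles spreading across a cross-section;
# the forward motion along the implant direction is implicit.
def simulate(x0, steps=200, dt=0.05, k=0.02):
    xs, vs = list(x0), [0.0] * len(x0)
    for _ in range(steps):
        # Coulomb-like 1-D repulsion: force ~ k * sign(dx) / dx^2.
        fs = [sum(k * (xi - xj) / (abs(xi - xj) ** 3 + 1e-9)
                  for j, xj in enumerate(xs) if j != i)
              for i, xi in enumerate(xs)]
        vs = [v + f * dt for v, f in zip(vs, fs)]   # semi-implicit Euler
        xs = [x + v * dt for x, v in zip(xs, vs)]
    return xs

start = [-0.1, 0.0, 0.1]      # lateral positions at the entry cross-section
final = simulate(start)       # particles spread out without crossing
```

Because the repulsion diverges at zero separation, the trajectories never cross, which is what keeps the resulting implant tracks well separated before the polynomial smoothing step.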
Prakash, Punit; Salgaonkar, Vasant A.; Diederich, Chris J.
2014-01-01
Endoluminal and catheter-based ultrasound applicators are currently under development and are in clinical use for minimally invasive hyperthermia and thermal ablation of various tissue targets. Computational models play a critical role in device design and optimization, assessment of therapeutic feasibility and safety, devising treatment monitoring and feedback control strategies, and performing patient-specific treatment planning with this technology. The critical aspects of theoretical modeling, applied specifically to endoluminal and interstitial ultrasound thermotherapy, are reviewed. Principles and practical techniques for modeling acoustic energy deposition, bioheat transfer, thermal tissue damage, and dynamic changes in the physical and physiological state of tissue are reviewed. The integration of these models and applications of simulation techniques in identification of device design parameters, development of real-time feedback-control platforms, assessing the quality and safety of treatment delivery strategies, and optimization of inverse treatment plans are presented. PMID:23738697
NASA Technical Reports Server (NTRS)
Liu, Gao-Lian
1991-01-01
Advances in inverse design and optimization theory in engineering fields in China are presented. Two original approaches, the image-space approach and the variational approach, are discussed in terms of turbomachine aerodynamic inverse design. Other areas of research in turbomachine aerodynamic inverse design include the improved mean-streamline (stream surface) method and optimization theory based on optimal control. Among the additional engineering fields discussed are the following: the inverse problem of heat conduction, free-surface flow, variational cogeneration of optimal grid and flow field, and optimal meshing theory of gears.
A stochastic framework for spot-scanning particle therapy.
Robini, Marc; Yuemin Zhu; Wanyu Liu; Magnin, Isabelle
2016-08-01
In spot-scanning particle therapy, inverse treatment planning is usually limited to finding the optimal beam fluences given the beam trajectories and energies. We address the much more challenging problem of jointly optimizing the beam fluences, trajectories and energies. For this purpose, we design a simulated annealing algorithm with an exploration mechanism that balances the conflicting demands of a small mixing time at high temperatures and a reasonable acceptance rate at low temperatures. Numerical experiments substantiate the relevance of our approach and open new horizons to spot-scanning particle therapy.
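The annealing scheme can be sketched generically. The temperature-scaled proposal below is one simple way to trade fast mixing at high temperatures against a reasonable acceptance rate at low temperatures; it is not the authors' exploration mechanism, and the objective and cooling schedule are placeholders.

```python
import math
import random

def anneal(f, x0, t0=1.0, t_end=1e-3, alpha=0.95, iters_per_t=50, seed=0):
    """Generic simulated-annealing minimizer for a 1-D objective f.
    The proposal step size scales with the temperature: large moves at
    high T (fast mixing), small moves at low T (reasonable acceptance),
    which is the balance the abstract's mechanism is designed around."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    while t > t_end:
        for _ in range(iters_per_t):
            step = t ** 0.5                      # temperature-scaled step
            y = x + rng.uniform(-step, step)
            fy = f(y)
            # Metropolis rule: always accept downhill, sometimes uphill
            if fy < fx or rng.random() < math.exp(-(fy - fx) / t):
                x, fx = y, fy
                if fx < fbest:
                    best, fbest = x, fx
        t *= alpha                               # geometric cooling
    return best, fbest
```

In the joint fluence/trajectory/energy problem the state would be a mixed discrete-continuous plan rather than a scalar, but the accept/cool loop has the same shape.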
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guida, K; Qamar, K; Thompson, M
Purpose: The RTOG 1005 trial offered a hypofractionated arm in delivering WBRT+SIB. Traditionally, treatments were planned at our institution using field-in-field (FiF) tangents with a concurrent 3D conformal boost. With the availability of VMAT, it is possible that a hybrid VMAT-3D planning technique could provide another avenue in treating WBRT+SIB. Methods: A retrospective study of nine patients previously treated using RTOG 1005 guidelines was performed to compare FiF+3D plans with the hybrid technique. A combination of static tangents and partial VMAT arcs were used in base-dose optimization. The hybrid plans were optimized to deliver 4005cGy to the breast PTVeval and 4800cGy to the lumpectomy PTVeval over 15 fractions. Plans were optimized to meet the planning goals dictated by RTOG 1005. Results: Hybrid plans yielded similar coverage of breast and lumpectomy PTVs (average D95 of 4013cGy compared to 3990cGy for conventional), while reducing the volume of high dose within the breast; the average D30 and D50 for the hybrid technique were 4517cGy and 4288cGy, compared to 4704cGy and 4377cGy for conventional planning. Hybrid plans increased conformity as well, yielding CI95% values of 1.22 and 1.54 for breast and lumpectomy PTVeval volumes; in contrast, conventional plans averaged 1.49 and 2.27, respectively. The nearby organs at risk (OARs) received more low dose with the hybrid plans due to low dose spray from the partial arcs, but all hybrid plans did meet the acceptable constraints, at a minimum, from the protocol. Treatment planning time was also reduced, as plans were inversely optimized (VMAT) rather than forward optimized. Conclusion: Hybrid-VMAT could be a solution in delivering WB+SIB, as plans yield very conformal treatment plans and maintain clinical standards in OAR sparing.
For treating breast cancer patients with a simultaneously integrated boost, hybrid-VMAT offers superior dosimetric conformity and reduced planning time compared with FiF techniques.
NASA Astrophysics Data System (ADS)
Hagan, Aaron; Sawant, Amit; Folkerts, Michael; Modiri, Arezoo
2018-01-01
We report on the design, implementation and characterization of a multi-graphic processing unit (GPU) computational platform for higher-order optimization in radiotherapy treatment planning. In collaboration with a commercial vendor (Varian Medical Systems, Palo Alto, CA), a research prototype GPU-enabled Eclipse (V13.6) workstation was configured. The hardware consisted of dual 8-core Xeon processors, 256 GB RAM and four NVIDIA Tesla K80 general purpose GPUs. We demonstrate the utility of this platform for large radiotherapy optimization problems through the development and characterization of a parallelized particle swarm optimization (PSO) four dimensional (4D) intensity modulated radiation therapy (IMRT) technique. The PSO engine was coupled to the Eclipse treatment planning system via a vendor-provided scripting interface. Specific challenges addressed in this implementation were (i) data management and (ii) non-uniform memory access (NUMA). For the former, we alternated between parameters over which the computation process was parallelized. For the latter, we reduced the amount of data required to be transferred over the NUMA bridge. The datasets examined in this study were approximately 300 GB in size, including 4D computed tomography images, anatomical structure contours and dose deposition matrices. For evaluation, we created a 4D-IMRT treatment plan for one lung cancer patient and analyzed computation speed while varying several parameters (number of respiratory phases, GPUs, PSO particles, and data matrix sizes). The optimized 4D-IMRT plan enhanced sparing of organs at risk by an average reduction of 26% in maximum dose, compared to the clinical optimized IMRT plan, where the internal target volume was used. We validated our computation time analyses in two additional cases. The computation speed in our implementation did not monotonically increase with the number of GPUs. 
The optimal number of GPUs (five, in our study) is directly related to the hardware specifications. The optimization process took 35 min using 50 PSO particles, 25 iterations and 5 GPUs.
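A minimal, serial sketch of the global-best PSO underlying the study follows. The Eclipse scripting interface, multi-GPU parallelization, and dose-deposition matrices are out of scope here; the toy objective, bounds, and coefficient values are assumptions.

```python
import numpy as np

def pso(f, dim, n_particles=50, iters=25, bounds=(-5.0, 5.0),
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best particle swarm optimization.
    f maps positions of shape (n_particles, dim) to fitness values;
    lower is better.  Returns the global best position and value."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))     # positions
    v = np.zeros_like(x)                            # velocities
    pbest, pval = x.copy(), f(x)                    # personal bests
    g = pbest[np.argmin(pval)].copy()               # global best
    for _ in range(iters):
        r1 = rng.random(x.shape)
        r2 = rng.random(x.shape)
        # inertia + cognitive pull (pbest) + social pull (gbest)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = f(x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[np.argmin(pval)].copy()
    return g, pval.min()
```

The defaults mirror the abstract's reported run (50 particles, 25 iterations); in the study each fitness evaluation is a GPU dose computation rather than a cheap analytic function.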
Prostate Dose Escalation by Innovative Inverse Planning-Driven IMRT
2006-11-01
At each step, we find the minimizer u_λ of the regularized functional J′, whose Euler-Lagrange equation is u − div(∇u/|∇u|) = f.
Zhang, Huaguang; Feng, Tao; Yang, Guang-Hong; Liang, Hongjing
2015-07-01
In this paper, the inverse optimal approach is employed to design distributed consensus protocols that guarantee consensus and global optimality with respect to some quadratic performance indexes for identical linear systems on a directed graph. The inverse optimal theory is developed by introducing the notion of partial stability. As a result, the necessary and sufficient conditions for inverse optimality are proposed. By means of the developed inverse optimal theory, the necessary and sufficient conditions are established for globally optimal cooperative control problems on directed graphs. Basic optimal cooperative design procedures are given based on asymptotic properties of the resulting optimal distributed consensus protocols, and the multiagent systems can reach desired consensus performance (convergence rate and damping rate) asymptotically. Finally, two examples are given to illustrate the effectiveness of the proposed methods.
Maximizing the potential of direct aperture optimization through collimator rotation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Milette, Marie-Pierre; Otto, Karl; Medical Physics, BC Cancer Agency-Vancouver Centre, Vancouver, British Columbia
Intensity-modulated radiation therapy (IMRT) treatment plans are conventionally produced by the optimization of fluence maps followed by a leaf sequencing step. An alternative to fluence based inverse planning is to optimize directly the leaf positions and field weights of multileaf collimator (MLC) apertures. This approach is typically referred to as direct aperture optimization (DAO). It has been shown that equivalent dose distributions may be generated that have substantially fewer monitor units (MU) and number of apertures compared to fluence based optimization techniques. Here we introduce a DAO technique with rotated apertures that we call rotating aperture optimization (RAO). The advantages of collimator rotation in IMRT have been shown previously and include higher fluence spatial resolution, increased flexibility in the generation of aperture shapes and less interleaf effects. We have tested our RAO algorithm on a complex C-shaped target, seven nasopharynx cancer recurrences, and one multitarget nasopharynx carcinoma patient. A study was performed in order to assess the capabilities of RAO as compared to fixed collimator angle DAO. The accuracy of fixed and rotated collimator aperture delivery was also verified. An analysis of the optimized treatment plans indicates that plans generated with RAO are as good as or better than DAO while maintaining a smaller number of apertures and MU than fluence based IMRT. Delivery verification results show that RAO is less sensitive to tongue and groove effects than DAO. Delivery time is currently increased due to the collimator rotation speed although this is a mechanical limitation that can be eliminated in the future.
A new Gamma Knife registered radiosurgery paradigm: Tomosurgery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, X.; Maciunas, R. J.; Dean, D.
This study proposes and simulates an inverse treatment planning and a continuous dose delivery approach for the Leksell Gamma Knife registered (LGK, Elekta, Stockholm, Sweden) which we refer to as 'Tomosurgery'. Tomosurgery uses an isocenter that moves within the irradiation field to continuously deliver the prescribed radiation dose in a raster-scanning format, slice by slice, within an intracranial lesion. Our Tomosurgery automated (inverse) treatment planning algorithm utilizes a two-stage optimization strategy. The first stage reduces the current three-dimensional (3D) treatment planning problem to a series of more easily solved 2D treatment planning subproblems. In the second stage, those 2D treatment plans are assembled to obtain a final 3D treatment plan for the entire lesion. We created Tomosurgery treatment plans for 11 patients who had already received manually-generated LGK treatment plans to treat brain tumors. For the seven cases without critical structures (CS), the Tomosurgery treatment plans showed borderline to significant improvement in within-tumor dose standard deviation (STD) (p<0.058, or p<0.011 excluding case 2) and conformality (p<0.042), respectively. In three of the four cases that presented CS, the Tomosurgery treatment plans showed no statistically significant improvements in dose conformality (p<0.184), and borderline significance in improving within-tumor dose homogeneity (p<0.054); CS damage measured by V20 or V30 (i.e., the irradiated CS volume that receives ≥20% or ≥30% of the maximum dose) showed no significant improvement in the Tomosurgery treatment plans (p<0.345 and p<0.423, respectively). However, the overall CS dose volume histograms were improved in the Tomosurgery treatment plans. In addition, the LGK Tomosurgery inverse treatment planning required less time than standard of care, forward (manual) LGK treatment planning (i.e., 5-35 min vs 1-3 h) for all 11 cases.
We expect that LGK Tomosurgery will speed treatment planning and improve treatment quality, especially for large and/or geometrically complex lesions. However, using only 4 mm collimators could greatly increase treatment plan delivery time for a large brain lesion. This issue is subject to further investigation.
SU-E-J-161: Inverse Problems for Optical Parameters in Laser Induced Thermal Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fahrenholtz, SJ; Stafford, RJ; Fuentes, DT
Purpose: Magnetic resonance-guided laser-induced thermal therapy (MRgLITT) is being investigated as a neurosurgical intervention for oncological applications throughout the body in active post-market studies. Real-time MR temperature imaging is used to monitor ablative thermal delivery in the clinic. Additionally, brain MRgLITT could improve through effective planning of laser fiber placement. Mathematical bioheat models have been extensively investigated but require reliable patient-specific physical parameter data, e.g. optical parameters. This abstract applies an inverse problem algorithm to characterize optical parameter data obtained from previous MRgLITT interventions. Methods: The implemented inverse problem has three primary components: a parameter-space search algorithm, a physics model, and training data. First, the parameter-space search algorithm uses a gradient-based quasi-Newton method to optimize the effective optical attenuation coefficient, μ_eff. A parameter reduction reduces the amount of optical parameter-space the algorithm must search. Second, the physics model is a simplified bioheat model for homogeneous tissue in which closed-form Green's functions represent the exact solution. Third, the training data were temperature imaging data from 23 MRgLITT oncological brain ablations (980 nm wavelength) in seven different patients. Results: To three significant figures, the descriptive statistics for μ_eff were: mean 1470 m⁻¹, median 1360 m⁻¹, standard deviation 369 m⁻¹, minimum 933 m⁻¹, and maximum 2260 m⁻¹. The standard deviation normalized by the mean was 25.0%. The inverse problem took <30 minutes to optimize all 23 datasets. Conclusion: As expected, the inferred average is biased by the underlying physics model. However, the standard deviation normalized by the mean is smaller than literature values and indicates an increased precision in the characterization of the optical parameters needed to plan MRgLITT procedures.
This investigation demonstrates the potential for the optimization and validation of more sophisticated bioheat models that incorporate the uncertainty of the data into the predictions, e.g. stochastic finite element methods.
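The fitting step can be sketched with a toy radial model: here an exp(−μ_eff·r)/r fluence term stands in for the paper's closed-form Green's-function bioheat solution, and the quasi-Newton search is scipy's L-BFGS-B. All geometry and numbers are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def temperature_rise(r, mu_eff, p0=1.0):
    """Toy steady-state model: temperature rise proportional to the
    diffusion-approximation fluence exp(-mu_eff * r) / r at radius r (m).
    This is a stand-in for the abstract's Green's-function solution."""
    return p0 * np.exp(-mu_eff * r) / r

def fit_mu_eff(r, t_meas, mu0=500.0):
    """Recover mu_eff (and a source-scale nuisance parameter) from radial
    temperature samples with a gradient-based quasi-Newton method,
    mirroring the abstract's parameter-space search."""
    def loss(theta):
        mu, p0 = theta
        return np.sum((temperature_rise(r, mu, p0) - t_meas) ** 2)
    res = minimize(loss, x0=[mu0, 1.0], method="L-BFGS-B",
                   bounds=[(1.0, 5000.0), (1e-6, None)],
                   options={"eps": 1e-6})
    return res.x[0]
```

With noiseless synthetic data generated at the abstract's mean value of 1470 m⁻¹, the fit recovers the coefficient; real MR temperature data would add noise and model bias, as the Conclusion notes.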
Dose-mass inverse optimization for minimally moving thoracic lesions
NASA Astrophysics Data System (ADS)
Mihaylov, I. B.; Moros, E. G.
2015-05-01
In the past decade, several different radiotherapy treatment plan evaluation and optimization schemes have been proposed as viable approaches, aiming for dose escalation or increased healthy tissue sparing. In particular, it has been argued that dose-mass plan evaluation and treatment plan optimization might be viable alternatives to the standard of care, which is realized through dose-volume evaluation and optimization. The purpose of this investigation is to apply dose-mass optimization to a cohort of lung cancer patients and compare the achievable healthy tissue sparing to that achievable through dose-volume optimization. Fourteen non-small cell lung cancer (NSCLC) patient plans were studied retrospectively. The range of tumor motion was less than 0.5 cm and motion management in the treatment planning process was not considered. For each case, dose-volume (DV)-based and dose-mass (DM)-based optimization was performed. Nine-field step-and-shoot IMRT was used, with all of the optimization parameters kept the same between DV and DM optimizations. Commonly used dosimetric indices (DIs) such as dose to 1% of the spinal cord volume, dose to 50% of the esophageal volume, and doses to 20 and 30% of healthy lung volumes were used for cross-comparison. Similarly, mass-based indices (MIs), such as doses to 20 and 30% of healthy lung masses, 1% of spinal cord mass, and 33% of heart mass, were also tallied. Statistical equivalence tests were performed to quantify the findings for the entire patient cohort. Both DV and DM plans for each case were normalized such that 95% of the planning target volume received the prescribed dose. DM optimization resulted in more organs at risk (OAR) sparing than DV optimization. The average sparing of cord, heart, and esophagus was 23, 4, and 6%, respectively. For the majority of the DIs, DM optimization resulted in lower lung doses.
On average, the doses to 20 and 30% of healthy lung were lower by approximately 3 and 4%, whereas lung volumes receiving 2000 and 3000 cGy were lower by 3 and 2%, respectively. The behavior of MIs was very similar. The statistical analyses of the results again indicated better healthy anatomical structure sparing with DM optimization. The presented findings indicate that dose-mass-based optimization results in statistically significant OAR sparing as compared to dose-volume-based optimization for NSCLC. However, the sparing is case-dependent and it is not observed for all tallied dosimetric endpoints.
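A mass-based index of the kind tallied above can be sketched as follows: instead of the minimum dose to the hottest x% of a structure's volume, take the minimum dose to the hottest x% of its mass, using voxel densities from CT. The array layout and units are assumptions, not this paper's implementation.

```python
import numpy as np

def dose_to_mass_fraction(dose, density, voxel_volume, mass_pct):
    """Dose-mass analogue of a DVH index: the minimum dose received by
    the hottest `mass_pct` percent of the structure's MASS (rather than
    volume).  `dose` in cGy, `density` in g/cm^3, `voxel_volume` in cm^3,
    all voxels assumed the same size."""
    order = np.argsort(dose)[::-1]              # voxels, hottest first
    mass = density[order] * voxel_volume
    cum = np.cumsum(mass) / mass.sum()          # cumulative mass fraction
    idx = np.searchsorted(cum, mass_pct / 100.0)
    return dose[order][min(idx, len(dose) - 1)]
```

With uniform density this reduces to the usual dose-volume index; in low-density lung, a mass-based index weights the dense (mass-bearing) voxels more heavily, which is the motivation for DM optimization.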
Liu, Wei; Li, Yupeng; Li, Xiaoqiang; Cao, Wenhua; Zhang, Xiaodong
2012-01-01
Purpose: The distal edge tracking (DET) technique in intensity-modulated proton therapy (IMPT) allows for high energy efficiency, fast and simple delivery, and simple inverse treatment planning; however, it is highly sensitive to uncertainties. In this study, the authors explored the application of DET in IMPT (IMPT-DET) and conducted robust optimization of IMPT-DET to see if the planning technique's sensitivity to uncertainties was reduced. They also compared conventional and robust optimization of IMPT-DET with three-dimensional IMPT (IMPT-3D) to gain understanding about how plan robustness is achieved. Methods: They compared the robustness of IMPT-DET and IMPT-3D plans to uncertainties by analyzing plans created for a typical prostate cancer case and a base of skull (BOS) cancer case (using data for patients who had undergone proton therapy at our institution). Spots with the highest and second highest energy layers were chosen so that the Bragg peak would be at the distal edge of the targets in IMPT-DET using 36 equally spaced beams; in IMPT-3D, 3 beams with angles chosen by a beam angle optimization algorithm were planned. Dose contributions for a number of range and setup uncertainties were calculated, and a worst-case robust optimization was performed. A robust quantification technique was used to evaluate the plans' sensitivity to uncertainties. Results: With no uncertainties considered, DET is less robust to uncertainties than the 3D method but offers better normal tissue protection. Robust optimization accounting for range and setup uncertainties can improve the robustness of IMPT plans; however, our findings show that the extent of improvement varies. Conclusions: IMPT's sensitivity to uncertainties can be improved by using robust optimization.
They found two possible mechanisms that made improvements possible: (1) a localized single-field uniform dose distribution (LSFUD) mechanism, in which the optimization algorithm attempts to produce a single-field uniform dose distribution while minimizing the patching field as much as possible; and (2) perturbed dose distribution, which follows the change in anatomical geometry. Multiple-instance optimization has more knowledge of the influence matrices; this greater knowledge improves IMPT plans’ ability to retain robustness despite the presence of uncertainties. PMID:22755694
NASA Astrophysics Data System (ADS)
Sánchez-Parcerisa, D.; Kondrla, M.; Shaindlin, A.; Carabe, A.
2014-12-01
FoCa is an in-house modular treatment planning system, developed entirely in MATLAB, which includes forward dose calculation of proton radiotherapy plans in both active and passive modalities as well as a generic optimization suite for inverse treatment planning. The software has a dual education and research purpose. From the educational point of view, it can be an invaluable teaching tool for educating medical physicists, showing the insights of a treatment planning system from a well-known and widely accessible software platform. From the research point of view, its current and potential uses range from the fast calculation of any physical, radiobiological or clinical quantity in a patient CT geometry, to the development of new treatment modalities not yet available in commercial treatment planning systems. The physical models in FoCa were compared with the commissioning data from our institution and show an excellent agreement in depth dose distributions and longitudinal and transversal fluence profiles for both passive scattering and active scanning modalities. 3D dose distributions in phantom and patient geometries were compared with a commercial treatment planning system, yielding a gamma-index pass rate of above 94% (using FoCa’s most accurate algorithm) for all cases considered. Finally, the inverse treatment planning suite was used to produce the first prototype of intensity-modulated, passive-scattered proton therapy, using 13 passive scattering proton fields and multi-leaf modulation to produce a concave dose distribution on a cylindrical solid water phantom without any field-specific compensator.
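The gamma-index comparison used to validate FoCa against a commercial system can be illustrated with a simple 1D global gamma computation. This is a sketch under simplifying assumptions (no interpolation between points, 1D profiles), not FoCa's implementation; real evaluations are 3D.

```python
import numpy as np

def gamma_1d(x_eval, d_eval, x_ref, d_ref, dist_mm=3.0, dose_pct=3.0):
    """Simple global 1D gamma index (default 3%/3 mm): for each evaluated
    point, the minimum combined dose-difference / distance-to-agreement
    metric over all reference points.  Points with gamma <= 1 pass."""
    dmax = d_ref.max()                          # global dose normalization
    g = np.empty_like(d_eval)
    for i, (x, d) in enumerate(zip(x_eval, d_eval)):
        dx = (x - x_ref) / dist_mm
        dd = (d - d_ref) / (dose_pct / 100.0 * dmax)
        g[i] = np.sqrt(dx * dx + dd * dd).min()
    return g
```

The pass rate quoted in the abstract is then simply `(gamma <= 1).mean()` over the evaluated dose grid.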
Smith, Wade P; Kim, Minsun; Holdsworth, Clay; Liao, Jay; Phillips, Mark H
2016-03-11
To build a new treatment planning approach that extends beyond radiation transport and IMRT optimization by modeling the radiation therapy process and prognostic indicators for more outcome-focused decision making. An in-house treatment planning system was modified to include multiobjective inverse planning, a probabilistic outcome model, and a multi-attribute decision aid. A genetic algorithm generated a set of plans embodying trade-offs between the separate objectives. An influence diagram network modeled the radiation therapy process of prostate cancer using expert opinion, results of clinical trials, and published research. A Markov model calculated a quality adjusted life expectancy (QALE), which was the endpoint for ranking plans. The Multiobjective Evolutionary Algorithm (MOEA) was designed to produce an approximation of the Pareto Front representing optimal tradeoffs for IMRT plans. Prognostic information from the dosimetrics of the plans, and from patient-specific clinical variables were combined by the influence diagram. QALEs were calculated for each plan for each set of patient characteristics. Sensitivity analyses were conducted to explore changes in outcomes for variations in patient characteristics and dosimetric variables. The model calculated life expectancies that were in agreement with an independent clinical study. The radiation therapy model proposed has integrated a number of different physical, biological and clinical models into a more comprehensive model. It illustrates a number of the critical aspects of treatment planning that can be improved and represents a more detailed description of the therapy process. A Markov model was implemented to provide a stronger connection between dosimetric variables and clinical outcomes and could provide a practical, quantitative method for making difficult clinical decisions.
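The Markov-model QALE endpoint can be sketched as a cycle-based occupancy computation. The states, utilities, and transition probabilities below are illustrative placeholders, not the paper's prostate-cancer model.

```python
import numpy as np

def qale(transition, utility, start, cycles=480, discount=0.0):
    """Quality-adjusted life expectancy from a cycle-based Markov model.
    transition[i, j] is the per-cycle probability of moving from state i
    to state j (rows sum to 1); utility[i] is the quality weight accrued
    per cycle spent in state i (0 for death).  Cycle length is implicit
    (e.g. one month), so divide by 12 for QALE in years."""
    T = np.asarray(transition, float)
    u = np.asarray(utility, float)
    p = np.asarray(start, float)                # initial state distribution
    total = 0.0
    for k in range(cycles):
        total += (p * u).sum() / (1.0 + discount) ** k
        p = p @ T                               # advance one cycle
    return total
```

In the paper's framework, the transition probabilities are themselves functions of the plan's dosimetric variables and patient characteristics via the influence diagram, so each candidate plan on the Pareto front gets its own QALE for ranking.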
MO-FG-CAMPUS-TeP2-04: Optimizing for a Specified Target Coverage Probability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fredriksson, A
2016-06-15
Purpose: The purpose of this work is to develop a method for inverse planning of radiation therapy margins. When using this method the user specifies a desired target coverage probability and the system optimizes to meet the demand without any explicit specification of margins to handle setup uncertainty. Methods: The method determines which voxels to include in an optimization function promoting target coverage in order to achieve a specified target coverage probability. Voxels are selected in a way that retains the correlation between them: The target is displaced according to the setup errors and the voxels to include are selected as the union of the displaced target regions under the x% best scenarios according to some quality measure. The quality measure could depend on the dose to the considered structure alone or could depend on the dose to multiple structures in order to take into account correlation between structures. Results: A target coverage function was applied to the CTV of a prostate case with prescription 78 Gy and compared to conventional planning using a DVH function on the PTV. Planning was performed to achieve 90% probability of CTV coverage. The plan optimized using the coverage probability function had P(D98 > 77.95 Gy) = 0.97 for the CTV. The PTV plan using a constraint on minimum DVH 78 Gy at 90% had P(D98 > 77.95) = 0.44 for the CTV. To match the coverage probability optimization, the DVH volume parameter had to be increased to 97% which resulted in 0.5 Gy higher average dose to the rectum. Conclusion: Optimizing a target coverage probability is an easily used method to find a margin that achieves the desired coverage probability. It can lead to reduced OAR doses at the same coverage probability compared to planning with margins and DVH functions.
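The scenario-selection rule in the Methods can be sketched on a 2D mask: displace the target by each sampled setup error, rank scenarios by a quality measure, and take the union of the displaced targets over the best fraction. The quality measure and shifts below are placeholders for illustration.

```python
import numpy as np

def coverage_voxels(target, shifts, quality, keep_frac=0.9):
    """Select the voxels to include in a target-coverage objective:
    displace the boolean `target` mask by each sampled setup error in
    `shifts` (list of (dy, dx) voxel offsets), rank scenarios by
    `quality` (higher = better), and take the UNION of the displaced
    targets over the best `keep_frac` of scenarios -- preserving the
    correlation between voxels, as the abstract describes."""
    order = np.argsort(quality)[::-1]                # best scenarios first
    n_keep = max(1, int(round(keep_frac * len(shifts))))
    union = np.zeros_like(target, dtype=bool)
    for s in order[:n_keep]:
        dy, dx = shifts[s]
        union |= np.roll(np.roll(target, dy, axis=0), dx, axis=1)
    return union
```

Optimizing coverage over this union, rather than over a fixed PTV expansion, is what lets the method hit a specified coverage probability without an explicit margin.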
Purdie, Thomas G; Dinniwell, Robert E; Letourneau, Daniel; Hill, Christine; Sharpe, Michael B
2011-10-01
To present an automated technique for two-field tangential breast intensity-modulated radiotherapy (IMRT) treatment planning. A total of 158 planned patients with Stage 0, I, and II breast cancer treated using whole-breast IMRT were retrospectively replanned using automated treatment planning tools. The tools developed are integrated into the existing clinical treatment planning system (Pinnacle(3)) and are designed to perform the manual volume delineation, beam placement, and IMRT treatment planning steps carried out by the treatment planning radiation therapist. The automated algorithm, using only the radio-opaque markers placed at CT simulation as inputs, optimizes the tangential beam parameters to geometrically minimize the amount of lung and heart treated while covering the whole-breast volume. The IMRT parameters are optimized according to the automatically delineated whole-breast volume. The mean time to generate a complete treatment plan was 6 min, 50 s ± 1 min 12 s. For the automated plans, 157 of 158 plans (99%) were deemed clinically acceptable, and 138 of 158 plans (87%) were deemed clinically improved or equal to the corresponding clinical plan when reviewed in a randomized, double-blinded study by one experienced breast radiation oncologist. In addition, overall the automated plans were dosimetrically equivalent to the clinical plans when scored for target coverage and lung and heart doses. We have developed robust and efficient automated tools for fully inverse-planned tangential breast IMRT that can be readily integrated into clinical practice. The tools produce clinically acceptable plans using only the common anatomic landmarks from the CT simulation process as an input. We anticipate the tools will improve patient access to high-quality IMRT treatment by simplifying the planning process and will reduce the effort and cost of incorporating more advanced planning into clinical practice. Crown Copyright © 2011. Published by Elsevier Inc.
All rights reserved.
Miklós, István; Darling, Aaron E
2009-06-22
Inversions are among the most common mutations acting on the order and orientation of genes in a genome, and polynomial-time algorithms exist to obtain a minimal length series of inversions that transform one genome arrangement to another. However, the minimum length series of inversions (the optimal sorting path) is often not unique, as many such optimal sorting paths exist. If we assume that all optimal sorting paths are equally likely, then statistical inference on genome arrangement history must account for all such sorting paths and not just a single estimate. No deterministic polynomial algorithm is known to count the number of optimal sorting paths nor sample from the uniform distribution of optimal sorting paths. Here, we propose a stochastic method that uniformly samples the set of all optimal sorting paths. Our method uses a novel formulation of parallel Markov chain Monte Carlo. In practice, our method can quickly estimate the total number of optimal sorting paths. We introduce a variant of our approach in which short inversions are modeled to be more likely, and we show how the method can be used to estimate the distribution of inversion lengths and breakpoint usage in pathogenic Yersinia pestis. The proposed method has been implemented in a program called "MC4Inversion." We compare MC4Inversion with the sampler implemented in BADGER and with a previously described importance sampling (IS) technique. We find that on high-divergence data sets, MC4Inversion finds more optimal sorting paths per second than BADGER and the IS technique while avoiding the bias inherent in the IS technique.
Visual display aid for orbital maneuvering - Design considerations
NASA Technical Reports Server (NTRS)
Grunwald, Arthur J.; Ellis, Stephen R.
1993-01-01
This paper describes the development of an interactive proximity operations planning system that allows on-site planning of fuel-efficient multiburn maneuvers in a potential multispacecraft environment. Although this display system most directly assists planning by providing visual feedback to aid visualization of the trajectories and constraints, its most significant features include: (1) the use of an 'inverse dynamics' algorithm that removes control nonlinearities facing the operator, and (2) a trajectory planning technique that separates, through a 'geometric spreadsheet', the normally coupled complex problems of planning orbital maneuvers and allows solution by an iterative sequence of simple independent actions. The visual feedback of trajectory shapes and operational constraints, provided by user-transparent and continuously active background computations, allows the operator to make fast, iterative design changes that rapidly converge to fuel-efficient solutions. The planning tool provides an example of operator-assisted optimization of nonlinear cost functions.
NASA Technical Reports Server (NTRS)
Dulikravich, George S. (Editor)
1991-01-01
Papers from the Third International Conference on Inverse Design Concepts and Optimization in Engineering Sciences (ICIDES) are presented. The papers discuss current research in the general field of inverse, semi-inverse, and direct design and optimization in engineering sciences. The rapid growth of this relatively new field is due to the availability of faster and larger computing machines.
Sensor module design and forward and inverse kinematics analysis of 6-DOF sorting transferring robot
NASA Astrophysics Data System (ADS)
Zhou, Huiying; Lin, Jiajian; Liu, Lei; Tao, Meng
2017-09-01
To meet the demands of high-throughput express parcel sorting, it is valuable to design a robot with multiple degrees of freedom that can both sort and transfer. This paper uses an infrared sensor, a color sensor, and a pressure sensor to acquire external information, combines a motion path planned in advance with the feedback from these sensors, and implements the corresponding control program. On this basis, a 6-DOF robot capable of multi-angle grasping is designed. To characterize its forward and inverse kinematics, the paper describes the coordinate frames and pose estimation using the Denavit-Hartenberg (D-H) parameter method and a closed-form solution. Based on the forward and inverse kinematics solutions, the link geometric parameters and link parameters are optimized for the application requirements. In this way, the robot can identify its route, sort, and transfer.
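The D-H forward-kinematics step described in this abstract can be sketched as follows. This is a minimal illustration assuming the standard D-H convention; the parameter table `DH` is made up for demonstration and is not the sorting robot's actual geometry.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transform for one link."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(joint_angles, dh_params):
    """Chain the per-link transforms; returns the end-effector pose (4x4)."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_params):
        T = T @ dh_transform(theta, d, a, alpha)
    return T

# Hypothetical 6-DOF parameter table (d, a, alpha) per link, for illustration only.
DH = [(0.10, 0.0, np.pi/2), (0.0, 0.25, 0.0), (0.0, 0.05, np.pi/2),
      (0.20, 0.0, -np.pi/2), (0.0, 0.0, np.pi/2), (0.05, 0.0, 0.0)]

pose = forward_kinematics([0.0] * 6, DH)
print(pose[:3, 3])  # end-effector position at the zero configuration
```

A closed-form inverse solution would then invert this chain for a desired pose, which is tractable for typical 6-DOF wrist-partitioned arms.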
Wang, Huan; Dong, Peng; Liu, Hongcheng; Xing, Lei
2017-02-01
Current treatment planning remains a costly and labor-intensive procedure and requires multiple trial-and-error adjustments of system parameters such as the weighting factors and prescriptions. The purpose of this work is to develop an autonomous treatment planning strategy that makes effective use of prior knowledge within a clinically realistic treatment planning platform to facilitate the radiation therapy workflow. Our technique consists of three major components: (i) a clinical treatment planning system (TPS); (ii) a decision function constructed using an ensemble of prior treatment plans; and (iii) an outer-loop optimization, independent of the clinical TPS, that evaluates the TPS-generated plan against the decision function and drives the search toward a solution optimizing that function. Microsoft (MS) Visual Studio Coded UI is applied to record common planner-TPS interactions as subroutines for querying and interacting with the TPS. These subroutines are called back in the outer-loop optimization program to navigate the plan selection process through the solution space iteratively. The utility of the approach is demonstrated using clinical prostate and head-and-neck cases. An autonomous treatment planning technique that makes effective use of an ensemble of prior treatment plans is developed to automatically maneuver the clinical treatment planning process on the platform of a commercial TPS. The process mimics the decision-making process of a human planner and provides a clinically sensible treatment plan automatically, thus reducing or eliminating the tedious manual trial and error of treatment planning. The prostate and head-and-neck treatment plans generated using the approach compare favorably with those used for the patients' actual treatments. The clinical inverse treatment planning process can be automated effectively with the guidance of an ensemble of prior treatment plans. 
The approach has the potential to significantly improve the radiation therapy workflow. © 2016 American Association of Physicists in Medicine.
An inverse dynamics approach to trajectory optimization and guidance for an aerospace plane
NASA Technical Reports Server (NTRS)
Lu, Ping
1992-01-01
The optimal ascent problem for an aerospace plane is formulated as an optimal inverse dynamics problem. Both minimum-fuel and minimax-type performance indices are considered. Some important features of the optimal trajectory and controls are used to construct a nonlinear feedback midcourse controller, which not only greatly simplifies the difficult constrained optimization problem and yields improved solutions, but is also suited for onboard implementation. Robust ascent guidance is obtained by using a combination of feedback compensation and onboard generation of control through the inverse dynamics approach. Accurate orbital insertion can be achieved with near-optimal control of the rocket through inverse dynamics even in the presence of disturbances.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Syh, J; Syh, J; Patel, B
Purpose: This case study was designed to confirm that an optimized plan was used to treat the skin surface of the left leg in three stages: 1. To evaluate dose distribution and plan quality when alternating the source-loading catheter pattern in the flexible Freiburg Flap skin surface (FFSS) applicator. 2. To investigate any impact on the Dose Volume Histogram (DVH) coverage of a large superficial surface target volume. 3. To compare the dose distribution if the target were treated with an electron beam. Methods: The Freiburg Flap is a flexible mesh-style surface mold for skin radiation or intraoperative surface treatments. It consists of multiple spheres that are attached to each other, holding and guiding up to 18 treatment catheters, and it ensures a constant distance of 5 mm from the treatment catheter to the surface. Three treatment trials with individual planning optimization were employed: 18 channels, 9 channels of the Freiburg Flap, and a 6 MeV electron beam. The comparisons focused on target coverage, dose conformity, and dose sparing of surrounding tissues. Results: The first brachytherapy plan was generated with 18 catheters inside the skin-wrapped flap (Figure 1A). A second, 9-catheter plan was generated using the same calculation points, which were assigned to match the prescription for target coverage as in the 18-catheter plan (Figure 1B). Optimized inverse planning was employed to reduce the dose to adjacent structures such as the tibia and fibula. The comparison of DVHs is depicted in Figure 2, and the external electron-beam RT plan in Figure 3. Overall comparisons among the three techniques were illustrated. Conclusion: The 9-channel Freiburg Flap flexible skin applicator offers a reasonably acceptable plan without compromising coverage. Use of an electron beam to treat a curved skin surface was discouraged because of low target coverage and high dose to adjacent tissues.
Flatness-based model inverse for feed-forward braking control
NASA Astrophysics Data System (ADS)
de Vries, Edwin; Fehn, Achim; Rixen, Daniel
2010-12-01
For modern cars, an increasing number of driver assistance systems have been developed. Some of these systems interfere with or assist the braking of a car. Here, a brake actuation algorithm for each individual wheel that can respond to both driver inputs and artificial vehicle deceleration set points is developed. The algorithm consists of a feed-forward control that ensures, within the modelled system plant, the optimal behaviour of the vehicle. For the quarter-car model with a LuGre tyre behavioural model, an inverse model can be derived using v_x as the 'flat output', that is, the input for the inverse model. A number of time derivatives of the flat output are required to calculate the model input, brake torque. Polynomial trajectory planning provides the needed time derivatives of the deceleration request. The transition time of the planning can be adjusted to meet actuator constraints. It is shown that the output of the trajectory planning would ripple and introduce a time delay when a gradual continuous increase of deceleration is requested by the driver. Derivative filters are then considered: the Bessel filter provides the best symmetry in its step response. A filter of the same order and with negative real poles is also used, exhibiting neither overshoot nor ringing. For these reasons, the 'real-poles' filter would be preferred over the Bessel filter. The half-car model can be used to predict the change in normal load on the front and rear axles due to the pitching of the vehicle. The anticipated dynamic variation of the wheel load can be included in the inverse model, even though it is based on a quarter-car. Brake force distribution proportional to normal load is established. It provides more natural and simpler equations than a fixed force-ratio strategy.
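The polynomial trajectory planning above can be illustrated with a quintic transition. This is a sketch under stated assumptions: a fifth-order polynomial with zero first and second derivatives at both ends (the paper's exact polynomial and boundary conditions are not given here), supplying the smooth time derivatives of the deceleration request that the inverse model needs.

```python
import numpy as np

def quintic_transition(a_target, T):
    """Return a(t), a'(t), a''(t) for a quintic ramp from 0 to a_target over
    [0, T] with zero 1st/2nd derivatives at both ends, so the flatness-based
    inverse model receives smooth derivatives of the deceleration request."""
    # a(t) = a_target * (10 s^3 - 15 s^4 + 6 s^5), with s = t / T
    def a(t):
        s = np.clip(t / T, 0.0, 1.0)
        return a_target * (10 * s**3 - 15 * s**4 + 6 * s**5)

    def da(t):
        s = np.clip(t / T, 0.0, 1.0)
        return a_target * (30 * s**2 - 60 * s**3 + 30 * s**4) / T

    def dda(t):
        s = np.clip(t / T, 0.0, 1.0)
        return a_target * (60 * s - 180 * s**2 + 120 * s**3) / T**2

    return a, da, dda

# Illustrative values: ramp to -5 m/s^2 over 0.4 s; T can be enlarged
# to respect actuator (brake torque rate) constraints.
a, da, dda = quintic_transition(a_target=-5.0, T=0.4)
```

Stretching `T` trades responsiveness for lower peak derivatives, which is exactly the actuator-constraint adjustment the abstract describes.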
Toward a web-based real-time radiation treatment planning system in a cloud computing environment.
Na, Yong Hum; Suh, Tae-Suk; Kapp, Daniel S; Xing, Lei
2013-09-21
To exploit the potential dosimetric advantages of intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT), an in-depth approach is required to provide efficient computing methods. This needs to incorporate clinically related organ-specific constraints, Monte Carlo (MC) dose calculations, and large-scale plan optimization. This paper describes our first steps toward a web-based real-time radiation treatment planning system in a cloud computing environment (CCE). The Amazon Elastic Compute Cloud (EC2) with a master node (type m2.xlarge: 17.1 GB of memory, two virtual cores with 3.25 EC2 Compute Units each, 420 GB of instance storage, 64-bit platform) is used as the backbone of cloud computing for dose calculation and plan optimization. The master node is able to scale the workers on an 'on-demand' basis. MC dose calculation is employed to generate accurate beamlet dose kernels by parallel tasks. The intensity modulation optimization uses total-variation regularization (TVR) and generates piecewise-constant fluence maps for each initial beam direction in a distributed manner over the CCE. The optimized fluence maps are segmented into deliverable apertures. The shape of each aperture is iteratively rectified to be a sequence of arcs using the manufacturer's constraints. The output plan file from the EC2 is sent to the simple storage service. Three de-identified clinical cancer treatment plans have been studied to evaluate the performance of the new planning platform with 6 MV flattening-filter-free beams (40 × 40 cm²) from the Varian TrueBeam™ STx linear accelerator. A CCE leads to speed-ups of up to 14-fold for both dose kernel calculations and plan optimizations in the head and neck, lung, and prostate cancer cases considered in this study. The proposed system relies on a CCE that is able to provide an infrastructure for parallel and distributed computing. 
The resultant plans from the cloud computing are identical to PC-based IMRT and VMAT plans, confirming the reliability of the cloud computing platform. This cloud computing infrastructure has been established for radiation treatment planning. It substantially improves the speed of inverse planning and makes future on-treatment adaptive re-planning possible.
MO-F-CAMPUS-T-03: Continuous Dose Delivery with Gamma Knife Perfexion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghobadi; Li, W; Chung, C
2015-06-15
Purpose: We propose continuous dose delivery techniques for stereotactic treatments delivered by Gamma Knife Perfexion using an inverse treatment planning system that can be applied to various tumour sites in the brain. We test the accuracy of the plans on Perfexion's planning system (GammaPlan) to ensure the obtained plans are viable. This approach introduces continuous dose delivery for Perfexion, as opposed to the currently employed step-and-shoot approaches, for different tumour sites. Additionally, this is the first realization of automated inverse planning on GammaPlan. Methods: The inverse planning approach is divided into two steps: identifying a quality path inside the target, and finding the best collimator composition for the path. To find a path, we select strategic regions inside the target volume and find a path that visits each region exactly once. This path is then passed to a mathematical model which finds the best combination of collimators and their durations. The mathematical model minimizes the dose spillage to the surrounding tissues while ensuring the prescribed dose is delivered to the target(s). Organs-at-risk and their corresponding allowable doses can also be added to the model to protect adjacent organs. Results: We test this approach on various tumour sizes and sites. The quality of the obtained treatment plans is comparable to or better than forward plans and inverse plans that use the step-and-shoot technique. The conformity indices in the obtained continuous dose delivery plans are similar to those of forward plans, while the beam-on time is improved on average (see Table 1 in the supporting document). Conclusion: We employ inverse planning for continuous dose delivery in Perfexion for brain tumours. The quality of the obtained plans is similar to forward and inverse plans that use the conventional step-and-shoot technique. We tested the inverse plans on GammaPlan to verify clinical relevance. 
This research was partially supported by Elekta, Sweden (vendor of Gamma Knife Perfexion).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sham, E; Sattarivand, M; Mulroy, L
Purpose: To evaluate the planning performance of an automated treatment planning software (BrainLAB Elements) for stereotactic radiosurgery (SRS) of multiple brain metastases. Methods: Brainlab's Multiple Metastases Elements (MME) uses a single-isocentre technique to treat up to 10 cranial planning target volumes (PTVs). The planning algorithm of the MME accounts for multiple PTVs overlapping with one another in the beam's eye view (BEV) and automatically selects a subset of all overlapping PTVs on each arc to spare normal tissues in the brain. The algorithm also optimizes collimator angles, margins between multi-leaf collimators (MLCs) and PTVs, and monitor units (MUs) by minimizing the conformity index (CI) for all targets. Planning performance was evaluated by comparing the MME-calculated treatment plan parameters with the same parameters calculated with Volumetric Modulated Arc Therapy (VMAT) optimization on Varian's Eclipse platform. Results: Figures 1 to 3 compare several treatment plan outcomes calculated with the MME and VMAT for five clinical multi-target SRS patient plans. The prescribed target dose was volume-dependent and defined based on the RTOG recommendation. For a total of 18 PTVs, mean values of the CI, PITV, and GI were comparable between the MME and VMAT within one standard deviation (σ). However, the MME-calculated MDPD was larger than the same VMAT-calculated parameter. While both techniques delivered similar maximum point doses to the critical cranial structures and similar total MUs for the five patient plans, the MME required an order of magnitude less treatment planning time than VMAT. Conclusion: The MME and VMAT produce similar plan qualities in terms of MUs, target dose conformation, and OAR dose sparing. 
While the selective use of PTVs for arc optimization with the MME significantly reduces the total planning time in comparison to VMAT, target dose homogeneity was compromised by the simplified inverse planning algorithm used.
NASA Astrophysics Data System (ADS)
Borot de Battisti, M.; Maenhout, M.; de Senneville, B. Denis; Hautvast, G.; Binnekamp, D.; Lagendijk, J. J. W.; van Vulpen, M.; Moerland, M. A.
2015-10-01
Focal high-dose-rate (HDR) brachytherapy for prostate cancer has gained increasing interest as an alternative to whole-gland therapy, as it may contribute to the reduction of treatment-related toxicity. For focal treatment, optimal needle guidance and placement is warranted. This can be achieved under MR guidance. However, MR-guided needle placement is currently not possible due to space restrictions in the closed MR bore. To overcome this problem, an MR-compatible, single-divergent needle-implant robotic device is under development at the University Medical Centre, Utrecht: placed between the legs of the patient inside the MR bore, this robot will tap the needle in a divergent pattern from a single rotation point into the tissue. This rotation point is just beneath the perineal skin, giving access to the focal prostate tumor lesion. Currently, there is no treatment planning system commercially available which allows optimization of the dose distribution with such a needle arrangement. The aim of this work is to develop an automatic inverse dose planning optimization tool for focal HDR prostate brachytherapy with needle insertions in a divergent configuration. A complete optimizer workflow is proposed which includes the determination of (1) the position of the center of rotation, (2) the needle angulations, and (3) the dwell times. Unlike most currently used optimizers, no prior selection or adjustment of input parameters such as minimum or maximum dose or weight coefficients for the treatment region and organs at risk is required. To test this optimizer, a planning study was performed on ten patients (treatment volumes ranged from 8.5 cm³ to 23.3 cm³) using 2-14 needle insertions. The total computation time of the optimizer workflow was below 20 min, and a clinically acceptable plan was reached on average using only four needle insertions.
Optimisation in radiotherapy. III: Stochastic optimisation algorithms and conclusions.
Ebert, M
1997-12-01
This is the final article in a three-part examination of optimisation in radiotherapy. Previous articles have established the bases and form of the radiotherapy optimisation problem, and examined certain types of optimisation algorithm, namely those which perform some form of ordered search of the solution space (mathematical programming), and those which attempt to find the closest feasible solution to the inverse planning problem (deterministic inversion). The current paper examines algorithms which search the space of possible irradiation strategies by stochastic methods. The resulting iterative search methods move about the solution space by sampling random variates, which gradually become more constricted as the algorithm converges upon the optimal solution. This paper also discusses the implementation of optimisation in radiotherapy practice.
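A stochastic search of the kind this abstract describes, with the sampling distribution gradually constricting as the algorithm converges, can be sketched as a simulated-annealing loop over beam weights. The dose matrix, prescription, and cooling schedule below are illustrative assumptions for a toy problem, not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: choose nonnegative beam weights w so that D @ w matches a
# prescription d_rx (D and d_rx are illustrative stand-ins, not clinical data).
D = rng.random((40, 6))
d_rx = np.full(40, 1.0)

def objective(w):
    return float(np.sum((D @ w - d_rx) ** 2))

def anneal(w, steps=4000, T0=1.0, step0=0.5):
    """Metropolis-style stochastic search: the temperature and the proposal
    radius both shrink as the iteration count grows."""
    best, f_best = w.copy(), objective(w)
    f_cur = f_best
    for k in range(steps):
        T = T0 * (1 - k / steps) + 1e-6        # cooling schedule
        step = step0 * (1 - k / steps) + 1e-3  # search radius constricts
        cand = np.maximum(w + rng.normal(0.0, step, size=w.shape), 0.0)
        f_cand = objective(cand)
        # Accept improvements always; accept worse moves with Boltzmann probability.
        if f_cand < f_cur or rng.random() < np.exp((f_cur - f_cand) / T):
            w, f_cur = cand, f_cand
            if f_cur < f_best:
                best, f_best = w.copy(), f_cur
    return best, f_best

w0 = np.full(6, 0.5)
w_opt, f_opt = anneal(w0)
```

The shrinking proposal radius is the "gradually more constricted" sampling the abstract refers to; the temperature controls how often uphill moves are tolerated early on.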
Thieke, Christian; Nill, Simeon; Oelfke, Uwe; Bortfeld, Thomas
2002-05-01
In inverse planning for intensity-modulated radiotherapy, the dose calculation is a crucial element limiting both the maximum achievable plan quality and the speed of the optimization process. One way to integrate accurate dose calculation algorithms into inverse planning is to precalculate the dose contribution of each beam element to each voxel for unit fluence. These precalculated values are stored in a large dose calculation matrix. The dose calculation during the iterative optimization process then consists merely of matrix look-up and multiplication with the actual fluence values. However, because the dose calculation matrix can become very large, this ansatz requires a lot of computer memory and is still very time consuming, making it impractical for clinical routine without further modifications. In this work we present a new method to significantly reduce the number of entries in the dose calculation matrix. The method utilizes the fact that a photon pencil beam has a rapid radial dose falloff, and has very small dose values for the most part. In this low-dose part of the pencil beam, the dose contribution to a voxel is only integrated into the dose calculation matrix with a certain probability. Normalization with the reciprocal of this probability preserves the total energy, even though many matrix elements are omitted. Three probability distributions were tested to find the most accurate one for a given memory size. The sampling method is compared with the use of a fully filled matrix and with the well-known method of simply cutting off the pencil beam at a certain lateral distance. A clinical example of a head and neck case is presented. It turns out that a sampled dose calculation matrix with only 1/3 of the entries of the fully filled matrix does not sacrifice the quality of the resulting plans, whereas the cutoff method results in a suboptimal treatment plan.
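The stochastic matrix-thinning idea can be sketched as below: entries of a pencil-beam kernel below a dose threshold are kept only with some probability and rescaled by its reciprocal, so the expected total energy is preserved while most small entries are dropped. The threshold, keep probability, and exponential falloff are illustrative assumptions, not the paper's actual values or kernel.

```python
import numpy as np

rng = np.random.default_rng(1)

def sparsify_pencil_beam(dose, threshold, keep_prob):
    """Keep every entry at or above `threshold`; below it, keep an entry only
    with probability `keep_prob`, scaled by 1/keep_prob so the expected total
    energy of the pencil beam is preserved."""
    out = np.where(dose >= threshold, dose, 0.0)
    low = dose < threshold
    kept = low & (rng.random(dose.shape) < keep_prob)
    out[kept] = dose[kept] / keep_prob
    return out

# Illustrative radial falloff of a photon pencil beam (not measured data).
r = np.linspace(0.0, 5.0, 100000)
dose = np.exp(-2.0 * r)

sparse = sparsify_pencil_beam(dose, threshold=0.05, keep_prob=0.1)
```

During optimization, the thinned matrix is used exactly like the full one (look-up and multiply); only its storage and per-iteration cost shrink.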
NASA Astrophysics Data System (ADS)
An, M.; Assumpcao, M.
2003-12-01
The joint inversion of receiver functions and surface waves is an effective way to diminish both the strong trade-off among parameters and the differing sensitivities to model parameters that affect their respective individual inversions, but the inversion problem becomes more complex. Multi-objective problems can be much more complicated than single-objective inversion in model selection and optimization. If several conflicting objectives are involved, models can be ordered only partially; in this case, Pareto-optimal preference should be used to select solutions. On the other hand, an inversion that yields only a few optimal solutions cannot deal properly with the strong trade-off between parameters, the uncertainties in the observations, the geophysical complexities, and even the incompetency of the inversion technique. The effective way is to retrieve the geophysical information statistically from many acceptable solutions, which requires more competent global algorithms. Recently proposed competent genetic algorithms are far superior to the conventional genetic algorithm and can solve hard problems quickly, reliably, and accurately. In this work we used one such competent genetic algorithm, the Bayesian Optimization Algorithm, as the main inverse procedure. This algorithm uses Bayesian networks to draw out inherited information and can use Pareto-optimal preference in the inversion. With this algorithm, the lithospheric structure of the Paraná Basin is inverted to fit both the observations of inter-station surface wave dispersion and receiver functions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahmed, Raef S.; Ove, Roger; Duan, Jun
2006-10-01
The treatment of maxillary sinus carcinoma with forward planning can be technically difficult when the neck also requires radiotherapy. This difficulty arises because of the need to spare the contralateral face while treating the bilateral neck, and there is considerable potential for error in clinical setup and treatment delivery. We evaluated intensity-modulated radiotherapy (IMRT) as an improvement on forward planning, and compared several inverse planning IMRT platforms. A composite dose-volume histogram (DVH) was generated from a complex forward-planned case. We compared the results with those generated by sliding-window fixed-field dynamic multileaf collimator (MLC) IMRT, using sets of coplanar beams. All setups included an anterior-posterior (AP) beam, and 3-, 5-, 7-, and 9-field configurations were evaluated. The dose prescription and objective function priorities were invariant. We also evaluated 2 commercial tomotherapy IMRT delivery platforms. DVH results from all of the IMRT approaches compared favorably with the forward plan. Results for the various inverse planning approaches varied considerably across platforms, despite an attempt to prescribe the therapy similarly. The improvement seen with the addition of beams in the fixed-beam sliding-window case was modest. IMRT is an effective means of delivering radiotherapy reliably in the complex setting of maxillary sinus carcinoma with neck irradiation. Differences in objective function definition and optimization algorithms can lead to unexpected differences in the final dose distribution, and our evaluation suggests that these factors are more significant than the beam arrangement or number of beams.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Safigholi, H; Mashouf, S; Soliman, A
Purpose: To evaluate the improvement in plan quality when various combinations of 192Ir, 60Co, and 169Yb sources are used with a novel direction modulated brachytherapy (DMBT) tandem applicator for high dose rate brachytherapy of cervical cancer. Methods: The proposed DMBT tandem applicator is designed for image-guided adaptive brachytherapy (IGABT), especially MRI-guided, of cervical cancer. It has 6 peripheral holes of 1.3-mm width, grooved along a 5.4-mm diameter nonmagnetic tungsten alloy rod of density 18.0 g/cc, capable of generating directional dose profiles, leading to enhanced dose-sculpting capacity through inverse planning. Monte Carlo simulations of the three HDR sources individually inside the DMBT applicator were performed and imported into an in-house developed inverse optimization code. We then performed inverse planning with 14 cervical cancer patients enrolled in the EMBRACE study. In all patients, 3D MRI-based planning was performed while utilizing (1) tandem-ring and needles attached to the ring (7 patients) and (2) tandem-ring and needles both attached to the ring and free-hand loaded (7 patients), in accordance with the GEC-ESTRO recommendations. All plans were normalized to deliver the same HRCTV D90, and DVH parameters were evaluated. Results: The DMBT tandem was used in all cases. Overall, the combined use of two sources (192Ir-60Co and 192Ir-169Yb, but not 60Co-169Yb) generally produced better quality plans than the 192Ir source alone in terms of sparing OARs. For example, up to 3.5, 4.4, and 3.9% individual reductions in D2cc were observed for the bladder, rectum, and sigmoid, respectively, between 192Ir-60Co and 192Ir-only plans for the patient cases in group 1, while up to 5.5, 2.0, and 5.7% individual reductions were observed for the patient cases in group 2. 
Conclusion: We have demonstrated that, in addition to the "directional modulation" of DMBT, the use of multiple sources with sufficient differences in energy can achieve additional improvement in plan quality for IGABT of cervical cancer.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Y; UT Southwestern Medical Center, Dallas, TX; Tian, Z
2015-06-15
Purpose: Intensity-modulated proton therapy (IMPT) is increasingly used in proton therapy. For IMPT optimization, Monte Carlo (MC) is desired for spot dose calculations because of its high accuracy, especially in cases with a high level of heterogeneity. It is also preferred in biological optimization problems due to its capability of computing quantities related to biological effects. However, MC simulation is typically too slow to be used for this purpose. Although GPU-based MC engines have become available, the achieved efficiency is still not ideal. The purpose of this work is to develop a new optimization scheme to include GPU-based MC in IMPT. Methods: A conventional approach using MC in IMPT simply calls the MC dose engine repeatedly for each spot's dose calculation. However, this is not optimal, because of the unnecessary computations on spots that turn out to have very small weights after solving the optimization problem. A GPU memory-writing conflict occurring at small beam sizes also reduces computational efficiency. To solve these problems, we developed a new framework that iteratively performs MC dose calculations and plan optimizations. At each dose calculation step, the particles were sampled from different spots altogether with the Metropolis algorithm, such that the particle number is proportional to the latest optimized spot intensity. Simultaneously transporting particles from multiple spots also mitigated the memory-writing conflict problem. Results: We have validated the proposed MC-based optimization scheme in one prostate case. The total computation time of our method was ∼5-6 min on one NVIDIA GPU card, including both spot dose calculation and plan optimization, whereas a conventional method naively using the same GPU-based MC engine was ∼3 times slower. Conclusion: A fast GPU-based MC dose calculation method along with a novel optimization workflow is developed. 
The high efficiency makes it attractive for clinical use.
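The intensity-proportional sampling step can be sketched as follows: a minimal stand-in (not the authors' Metropolis implementation) that allocates MC histories across spots in proportion to the latest optimized spot intensities; function names and weights are illustrative.

```python
import numpy as np

def allocate_histories(spot_weights, n_total, rng=None):
    """Allocate MC histories across spots in proportion to the latest
    optimized spot intensities (simplified stand-in for the paper's
    Metropolis-style sampling)."""
    rng = np.random.default_rng(0) if rng is None else rng
    w = np.asarray(spot_weights, dtype=float)
    p = w / w.sum()                      # per-spot sampling probability
    return rng.multinomial(n_total, p)   # one spot drawn per history

weights = [5.0, 1.0, 0.0, 4.0]           # hypothetical spot intensities
counts = allocate_histories(weights, 100000)
```

Spots that the optimizer has already driven to zero weight receive no histories at all, which is the source of the reported savings over naively recomputing every spot dose.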
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Runqing; Zhan, Lixin; Osei, Ernest
2016-08-15
Introduction: The purpose of this study is to investigate the effects of daily setup variations on prone breast forward field-in-field (FinF) and inverse IMRT treatment planning. Methods: A Rando phantom (left breast) and a Pixy phantom (right breast) were built and CT scanned in the prone position. Treatment planning (TP) was performed in the Eclipse TP system. A forward FinF plan and an inverse IMRT plan were created to satisfy the CTV coverage and OAR criteria. The daily setup variations were assumed to be 5 mm in the left-right, superior-inferior, and anterior-posterior directions. The DVHs of CTV coverage and OARs under the 5 mm setup variation were compared for the forward FinF and inverse IMRT plans. Results and Discussions: The DVHs of CTV coverage showed only small changes under the 5 mm setup variation for the forward FinF and inverse IMRT plans for both phantoms. However, for setup variations in the left-right direction, the CTV coverage of the IMRT plan degraded the most for both phantoms. For anterior-posterior variation, the CTV could not get full coverage when the breast chest wall is shallow; however, with the guidance of MV imaging, the breast chest wall will be checked during the MV imaging setup. Overall, setup variations have a larger effect on the inverse IMRT plan than on the forward FinF plan, especially in the left-right direction. Conclusions: The forward FinF plan is recommended clinically considering daily setup variation.
Matuszak, Martha M; Steers, Jennifer M; Long, Troy; McShan, Daniel L; Fraass, Benedick A; Romeijn, H Edwin; Ten Haken, Randall K
2013-07-01
To introduce a hybrid volumetric modulated arc therapy/intensity modulated radiation therapy (VMAT/IMRT) optimization strategy called FusionArc that combines the delivery efficiency of single-arc VMAT with the potentially desirable intensity modulation possible with IMRT. A beamlet-based inverse planning system was enhanced to combine the advantages of VMAT and IMRT into one comprehensive technique. In the hybrid strategy, baseline single-arc VMAT plans are optimized and then the current cost function gradients with respect to the beamlets are used to define a metric for predicting which beam angles would benefit from further intensity modulation. Beams with the highest metric values (called the gradient factor) are converted from VMAT apertures to IMRT fluence, and the optimization proceeds with the mixed variable set until convergence or until additional beams are selected for conversion. One phantom and two clinical cases were used to validate the gradient factor and characterize the FusionArc strategy. Comparisons were made between standard IMRT, single-arc VMAT, and FusionArc plans with one to five IMRT∕hybrid beams. The gradient factor was found to be highly predictive of the VMAT angles that would benefit plan quality the most from beam modulation. Over the three cases studied, a FusionArc plan with three converted beams achieved superior dosimetric quality with reductions in final cost ranging from 26.4% to 48.1% compared to single-arc VMAT. Additionally, the three-beam FusionArc plans required 22.4%-43.7% fewer MU∕Gy than a seven-beam IMRT plan. While the FusionArc plans with five converted beams offer larger reductions in final cost--32.9%-55.2% compared to single-arc VMAT--the decrease in MU∕Gy relative to IMRT was noticeably smaller, at 12.2%-18.5%. A hybrid VMAT∕IMRT strategy was implemented to find a high quality compromise between gantry-angle and intensity-based degrees of freedom.
This optimization method will allow patients to be simultaneously planned for dosimetric quality and delivery efficiency without switching between delivery techniques. Example phantom and clinical cases suggest that the conversion of only three VMAT segments to modulated beams may result in a good combination of quality and efficiency.
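The abstract does not define the gradient factor precisely; a plausible sketch, using the mean absolute cost gradient over each beam's beamlets as the ranking metric, might look like this (beam angles and gradient values are hypothetical):

```python
import numpy as np

def beams_to_convert(beamlet_gradients, k=3):
    """Rank VMAT beam angles by a 'gradient factor'-style metric --
    here the mean absolute cost gradient over each beam's beamlets
    (a stand-in; FusionArc's exact metric is not given in the
    abstract) -- and return the top k angles to convert to IMRT."""
    scores = {b: float(np.mean(np.abs(g))) for b, g in beamlet_gradients.items()}
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:k]

# Hypothetical per-angle beamlet gradients of the cost function.
grads = {0: np.array([0.1, 0.2]),
         90: np.array([1.5, 2.0]),
         180: np.array([0.05, 0.0]),
         270: np.array([0.8, 0.6])}
chosen = beams_to_convert(grads, k=2)
```

Angles whose beamlets still carry large cost gradients are the ones where added intensity modulation can move the objective the most, which matches the paper's finding that the metric predicts the most beneficial conversions.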
An inverse dynamics approach to trajectory optimization for an aerospace plane
NASA Technical Reports Server (NTRS)
Lu, Ping
1992-01-01
An inverse dynamics approach for trajectory optimization is proposed. This technique can be useful in many difficult trajectory optimization and control problems. The application of the approach is exemplified by ascent trajectory optimization for an aerospace plane. Both minimum-fuel and minimax types of performance indices are considered. When rocket augmentation is available for ascent, it is shown that accurate orbital insertion can be achieved through the inverse control of the rocket in the presence of disturbances.
Gravity inversion of a fault by Particle swarm optimization (PSO).
Toushmalani, Reza
2013-01-01
Particle swarm optimization (PSO) is a heuristic global optimization algorithm based on swarm intelligence, inspired by research on the flocking behavior of birds and fish. In this paper we introduce and apply this method to the gravity inverse problem. We discuss the solution of the inverse problem of determining the shape of a fault whose gravity anomaly is known. Application of the proposed algorithm to this problem has proven its capability to deal with difficult optimization problems. The technique proved to work efficiently when tested on a number of models.
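A minimal PSO of the kind described can be sketched as follows, applied to a toy one-parameter misfit rather than the paper's gravity forward model (all parameter values are generic defaults, not the authors' settings):

```python
import numpy as np

def pso(misfit, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer for a one-parameter inverse
    problem (the paper's fault model and gravity forward operator
    are not reproduced here)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, n_particles)          # positions
    v = np.zeros(n_particles)                     # velocities
    pbest = x.copy()                              # personal bests
    pbest_f = np.array([misfit(xi) for xi in x])
    gbest = pbest[pbest_f.argmin()]               # global best
    for _ in range(iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([misfit(xi) for xi in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()]
    return float(gbest)

# Toy misfit: recover a fault "depth" of 2.5 from a quadratic misfit.
depth = pso(lambda z: (z - 2.5) ** 2, bounds=(0.0, 10.0))
```

In the actual application, `misfit` would be the squared difference between the observed gravity anomaly and the anomaly computed from a candidate fault geometry.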
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haber, Eldad
2014-03-17
The focus of the research was: developing adaptive meshes for the solution of Maxwell's equations; developing a parallel framework for time-dependent inverse Maxwell's equations; developing multilevel methods for optimization problems with inequality constraints; a new inversion code for inverse Maxwell's equations at the 0th frequency (DC resistivity); and a new inversion code for inverse Maxwell's equations in the low-frequency regime. Although the research concentrated on electromagnetic forward and inverse problems, the results of the research were applied to the problem of image registration.
Treuer, Harald; Hoevels, Moritz; Luyken, Klaus; Visser-Vandewalle, Veerle; Wirths, Jochen; Kocher, Martin; Ruge, Maximilian
2015-06-01
Stereotactic radiosurgery with an adapted linear accelerator (linac-SRS) is an established therapy option for brain metastases, benign brain tumors, and arteriovenous malformations. We intended to investigate whether the dosimetric quality of treatment plans achieved with a CyberKnife (CK) is at least equivalent to that for linac-SRS with circular or micromultileaf collimators (microMLC). A random sample of 16 patients with 23 target volumes, previously treated with linac-SRS, was replanned with CK. Planning constraints were identical dose prescription and clinical applicability. In all cases uniform optimization scripts and inverse planning objectives were used. Plans were compared with respect to coverage, minimal dose within the target volume, conformity index, and volume of brain tissue irradiated with ≥ 10 Gy. Generating the CK plan was unproblematic with simple optimization scripts in all cases. With the CK plans, coverage, minimal target volume dose, and conformity index were significantly better, while no significant improvement could be shown regarding the 10 Gy volume. Multiobjective comparison for the irradiated target volumes was superior for the CK plan in 20 out of 23 cases and equivalent in 3 out of 23 cases. Multiobjective comparison for the treated patients was superior for the CK plan in all 16 cases. The results clearly demonstrate the superiority of the irradiation plan for CK compared to classical linac-SRS with circular collimators and microMLC. In particular, the 1.9 Gy increase in the average minimal target volume dose per patient, together with a 14% better conformity index, appears to be an improvement of clinical relevance.
NASA Astrophysics Data System (ADS)
Huhn, Stefan; Peeling, Derek; Burkart, Maximilian
2017-10-01
With the availability of die face design tools and incremental solver technologies providing detailed forming feasibility results in a timely fashion, the use of inverse solver technologies and the resulting process improvements during the product development process of stamped parts is often underestimated. This paper presents some applications of inverse technologies that are currently used in the automotive industry to streamline the product development process and greatly increase the quality of a developed process and the resulting product. The first focus is on the so-called target strain technology. Application examples show how inverse forming analysis can be applied to support the process engineer during the development of a die face geometry for Class `A' panels. The drawing process is greatly affected by the die face design, and the process designer has to ensure that the resulting drawn panel will meet specific requirements regarding surface quality and a minimum strain distribution to ensure dent resistance. The target strain technology provides almost immediate feedback to the process engineer during the die face design process on whether a specific change to the die face design will help achieve these specific requirements or will be counterproductive. The paper further shows how an optimization of the material flow can be achieved through the use of a newly developed technology called Sculptured Die Face (SDF). The die face generation in SDF is better suited for use in optimization loops than any conventional die face design technology based on cross-section design. A second focus of this paper is on the use of inverse solver technologies for secondary forming operations. The paper shows how inverse technology can be applied to accurately and quickly develop trim lines on simple as well as on complex support geometries.
Viscoelastic material inversion using Sierra-SD and ROL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walsh, Timothy; Aquino, Wilkins; Ridzal, Denis
2014-11-01
In this report we derive frequency-domain methods for inverse characterization of the constitutive parameters of viscoelastic materials. The inverse problem is cast in a PDE-constrained optimization framework with efficient computation of gradients and Hessian vector products through matrix free operations. The abstract optimization operators for first and second derivatives are derived from first principles. Various methods from the Rapid Optimization Library (ROL) are tested on the viscoelastic inversion problem. The methods described herein are applied to compute the viscoelastic bulk and shear moduli of a foam block model, which was recently used in experimental testing for viscoelastic property characterization.
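The matrix-free Hessian-vector idea can be illustrated on a simple quadratic misfit J(m) = 0.5 ||Am - d||^2, for which the Hessian action is Hv = A^T(Av) and H never needs to be formed. This sketch uses SciPy's `LinearOperator` and conjugate gradients as a generic stand-in for ROL's solvers; the operator and data are random placeholders, not the report's viscoelastic model:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

# Quadratic misfit J(m) = 0.5 * ||A m - d||^2: the Hessian action is
# H v = A^T (A v), applied matrix-free without ever forming H.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 10))        # stand-in forward operator
d = rng.standard_normal(50)              # stand-in data

H = LinearOperator((10, 10), matvec=lambda v: A.T @ (A @ v))
grad0 = -A.T @ d                         # gradient of J at m = 0
m, info = cg(H, -grad0)                  # Newton step: solve H m = -grad
```

For a PDE-constrained problem the `matvec` would instead involve a forward and an adjoint PDE solve per application, which is exactly what makes the matrix-free formulation attractive.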
USDA-ARS?s Scientific Manuscript database
Determination of the optical properties from intact biological materials based on diffusion approximation theory is a complicated inverse problem, and it requires proper implementation of inverse algorithm, instrumentation, and experiment. This work was aimed at optimizing the procedure of estimatin...
Research on inverse methods and optimization in Italy
NASA Technical Reports Server (NTRS)
Larocca, Francesco
1991-01-01
The research activities in Italy on inverse design and optimization are reviewed. The review is focused on aerodynamic aspects in turbomachinery and wing section design. Inverse design of blade rows and ducts of turbomachinery in subsonic and transonic regime are illustrated by the Politecnico di Torino and turbomachinery industry (FIAT AVIO).
TH-C-12A-04: Dosimetric Evaluation of a Modulated Arc Technique for Total Body Irradiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsiamas, P; Czerminska, M; Makrigiorgos, G
2014-06-15
Purpose: A simplified Total Body Irradiation (TBI) technique was developed to work with minimal requirements in a compact linac room without a custom motorized TBI couch. Results were compared to our existing fixed-gantry double 4 MV linac TBI system with the patient prone and simultaneous AP/PA irradiation. Methods: A modulated arc irradiates the patient positioned prone/supine along the craniocaudal axis. A simplified inverse planning method was developed to optimize dose rate as a function of gantry angle for various patient sizes without the need for a graphical 3D treatment planning system. This method can be easily adapted and used with minimal resources. A fixed maximum field size (40×40 cm2) is used to decrease radiation delivery time. The dose rate as a function of gantry angle was optimized to produce uniform dose inside rectangular phantoms of various sizes, and custom VMAT DICOM plans were generated using a DICOM editor tool. Monte Carlo simulations, film, and ionization chamber dosimetry for various setups were used to derive and test an extended-SSD beam model based on PDD/OAR profiles for Varian 6EX/TX. Measurements were obtained using solid water phantoms. The dose rate modulation function was determined for various patient sizes (100 cm − 200 cm). Depending on the size of the patient, the arc range varied from 100° to 120°. Results: A PDD/OAR-based beam model for modulated arc TBI therapy was developed. The lateral dose profiles produced were similar to the profiles of our existing TBI facility. Calculated delivery time and full arc depended on the size of the patient (∼8 min/100° − 10 min/120°, 100 cGy). Dose heterogeneity varied by about ±5% − ±10% depending on the patient size and distance to the surface (buildup region). Conclusion: TBI using a simplified modulated arc along the craniocaudal axis of different size patients positioned on the floor can be achieved without graphical/inverse 3D planning.
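One simple way to optimize dose rate versus gantry angle for a uniform dose, in the spirit of the method, is a nonnegative least-squares fit of per-angle beam-on weights to a flat prescription. The influence values below are made up; in the paper they come from the extended-SSD beam model:

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical influence matrix: dose to points along the craniocaudal
# axis per unit beam-on time at each gantry angle (placeholder numbers,
# not the paper's PDD/OAR-derived beam model).
rng = np.random.default_rng(2)
n_points, n_angles = 40, 12
A = rng.uniform(0.5, 1.5, size=(n_points, n_angles))
target = np.full(n_points, 100.0)        # uniform prescription (cGy)

weights, residual = nnls(A, target)      # nonnegative dose-rate weights
dose = A @ weights                       # resulting dose profile
```

The nonnegativity constraint matters because a dose rate cannot be negative; the fitted weights define the dose-rate modulation function over the arc.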
NASA Astrophysics Data System (ADS)
Zhang, Pengpeng
The Leksell Gamma Knife® (LGK) is a tool for providing accurate stereotactic radiosurgical treatment of brain lesions, especially tumors. Currently, the treatment planning team "forward" plans radiation treatment parameters while viewing a series of 2D MR scans. This primarily manual process is cumbersome and time consuming because of the difficulty of visualizing the large search space for the radiation parameters (i.e., shot overlap, number, location, size, and weight). I hypothesize that a computer-aided "inverse" planning procedure that utilizes tumor geometry and treatment goals could significantly improve the planning process and therapeutic outcome of LGK radiosurgery. My basic observation is that the treatment team is best at identification of the location of the lesion and prescribing a lethal, yet safe, radiation dose. The treatment planning computer is best at determining both the 3D tumor geometry and the optimal LGK shot parameters necessary to deliver a desirable dose pattern to the tumor while sparing adjacent normal tissue. My treatment planning procedure asks the neurosurgeon to identify the tumor and critical structures in MR images and the oncologist to prescribe a tumoricidal radiation dose. Computer assistance begins with geometric modeling of the 3D tumor's medial axis properties, starting with a new algorithm, a Gradient-Phase Plot (G-P Plot) decomposition of the tumor object's medial axis. I have found that medial axis seeding, while insufficient in most cases to produce an acceptable treatment plan, greatly reduces the solution space for Guided Evolutionary Simulated Annealing (GESA) treatment plan optimization by specifying an initial estimate for shot number, size, and location, but not weight. These estimates are used to generate multiple initial plans which become initial seed plans for GESA. The shot location and weight parameters evolve and compete in the GESA procedure.
The GESA objective function optimizes tumor irradiation (i.e., as close to the prescribed dose as possible) and minimizes normal tissue and critical structure damage. In tests of five patient data sets (4 acoustic neuromas and 1 meningioma), the G-P Plot/GESA-generated treatment plans improved conformality of the lethal dose to the tumor, required no human interaction, improved dose homogeneity, suggested use of fewer shots, and reduced treatment administration time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Irazola, L; Sanchez-Doblado, F; Servicio de Radiofisica, Hospital Universitario Virgen Macarena, Seville
2015-06-15
Purpose: Differences between radiotherapy techniques and energies can offer improvements in tumor coverage and organ-at-risk preservation. However, a more complete decision should include the peripheral doses delivered to the patient. The purpose of this work is to compare the photon and neutron peripheral doses for a prostate case planned with 6 different treatment modalities. Methods: Inverse and forward IMRT and 3D-CRT plans were created at 6 and 15 MV for a Siemens Primus linac, using the same CT data set and contours. The methodology described in [1] was used with the TNRD thermal neutron detector [2] for neutron peripheral dose estimation at 7 relevant organs (colon, esophagus, stomach, liver, lung, thyroid, and skin). Photon doses were estimated for these organs by means of the algorithm proposed in [3]. Plans were optimized with the same restrictions and limited to 30 segments in the inverse case. Results: A similar photon peripheral dose was found comparing the 6 and 15 MV cases, with slightly higher values of (1.9 ± 1.6)% on average for the 6 MV cases. Neutron presence when using 15 MV represents an increase in peripheral dose of (18 ± 17)% on average. Due to the higher number of MU used in inverse IMRT, an increase of (22 ± 3)% in neutron dose was found relative to the forward and 3D-CRT plans. This corresponds to photon doses within 44 and 255 mSv across the organs, for a dose prescription of 68 Gy at the isocenter. Conclusion: Neutron and photon peripheral doses for a prostate treatment planned with 6 different techniques have been analyzed. The 6 MV plans are slightly more demanding in terms of photon peripheral doses. The inverse technique at 15 MV proved to be the most demanding in terms of total peripheral doses, including neutrons and photons.
Penalization of aperture complexity in inversely planned volumetric modulated arc therapy
Younge, Kelly C.; Matuszak, Martha M.; Moran, Jean M.; McShan, Daniel L.; Fraass, Benedick A.; Roberts, Donald A.
2012-01-01
Purpose: Apertures obtained during volumetric modulated arc therapy (VMAT) planning can be small and irregular, resulting in dosimetric inaccuracies during delivery. Our purpose is to develop and integrate an aperture-regularization objective function into the optimization process for VMAT, and to quantify the impact of using this objective function on dose delivery accuracy and optimized dose distributions. Methods: An aperture-based metric (“edge penalty”) was developed that penalizes complex aperture shapes based on the ratio of MLC side edge length and aperture area. To assess the utility of the metric, VMAT plans were created for example paraspinal, brain, and liver SBRT cases with and without incorporating the edge penalty in the cost function. To investigate the dose calculation accuracy, Gafchromic EBT2 film was used to measure the 15 highest weighted apertures individually and as a composite from each of two paraspinal plans: one with and one without the edge penalty applied. Films were analyzed using a triple-channel nonuniformity correction and measurements were compared directly to calculations. Results: Apertures generated with the edge penalty were larger, more regularly shaped and required up to 30% fewer monitor units than those created without the edge penalty. Dose volume histogram analysis showed that the changes in doses to targets, organs at risk, and normal tissues were negligible. Edge penalty apertures that were measured with film for the paraspinal plan showed a notable decrease in the number of pixels disagreeing with calculation by more than 10%. For a 5% dose passing criterion, the number of pixels passing in the composite dose distributions for the non-edge penalty and edge penalty plans were 52% and 96%, respectively. Employing gamma with 3% dose/1 mm distance criteria resulted in a 79.5% (without penalty)/95.4% (with penalty) pass rate for the two plans. Gradient compensation of 3%/1 mm resulted in 83.3%/96.2% pass rates. 
Conclusions: The use of the edge penalty during optimization has the potential to markedly improve dose delivery accuracy for VMAT plans while still maintaining high quality optimized dose distributions. The penalty regularizes aperture shape and improves delivery efficiency. PMID:23127107
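A sketch of an edge-penalty-style metric, taking the exposed MLC side-edge length divided by the aperture area; the paper's exact formulation may differ, and the leaf positions below are hypothetical:

```python
import numpy as np

def edge_penalty(left, right, leaf_width=1.0):
    """Aperture complexity in the spirit of the paper's edge penalty:
    exposed MLC side-edge length divided by aperture area. Each row i
    is open on (left[i], right[i]); side edges are exposed wherever
    adjacent rows' openings fail to overlap. (Sketch only; the paper's
    exact formulation may differ.)"""
    left, right = np.asarray(left, float), np.asarray(right, float)
    openings = np.maximum(right - left, 0.0)
    area = leaf_width * openings.sum()
    edge = 0.0
    for i in range(len(left) - 1):
        overlap = max(min(right[i], right[i + 1]) - max(left[i], left[i + 1]), 0.0)
        edge += (openings[i] - overlap) + (openings[i + 1] - overlap)
    return edge / area if area > 0 else np.inf

# A jagged aperture scores higher (more complex) than a regular one.
regular = edge_penalty([0, 0, 0], [5, 5, 5])
jagged = edge_penalty([0, 3, 0], [2, 5, 2])
```

Adding this ratio to the cost function pushes the optimizer toward larger, more regular apertures, consistent with the reported reduction in monitor units and improved delivery accuracy.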
Yang, Y. M.; Svatos, M.; Zankowski, C.; Bednarz, B.
2016-01-01
Purpose: The future of radiation therapy will require advanced inverse planning solutions to support single-arc, multiple-arc, and “4π” delivery modes, which present unique challenges in finding an optimal treatment plan over a vast search space, while still preserving dosimetric accuracy. The successful clinical implementation of such methods would benefit from Monte Carlo (MC) based dose calculation methods, which can offer improvements in dosimetric accuracy when compared to deterministic methods. The standard method for MC based treatment planning optimization leverages the accuracy of the MC dose calculation and efficiency of well-developed optimization methods, by precalculating the fluence to dose relationship within a patient with MC methods and subsequently optimizing the fluence weights. However, the sequential nature of this implementation is computationally time consuming and memory intensive. Methods to reduce the overhead of the MC precalculation have been explored in the past, demonstrating promising reductions of computational time overhead, but with limited impact on the memory overhead due to the sequential nature of the dose calculation and fluence optimization. The authors propose an entirely new form of “concurrent” Monte Carlo treatment plan optimization: a platform which optimizes the fluence during the dose calculation, reduces wasted computation time being spent on beamlets that weakly contribute to the final dose distribution, and requires only a low memory footprint to function. In this initial investigation, the authors explore the key theoretical and practical considerations of optimizing fluence in such a manner. Methods: The authors present a novel derivation and implementation of a gradient descent algorithm that allows for optimization during MC particle transport, based on highly stochastic information generated through particle transport of very few histories.
A gradient rescaling and renormalization algorithm, and the concept of momentum from stochastic gradient descent were used to address obstacles unique to performing gradient descent fluence optimization during MC particle transport. The authors have applied their method to two simple geometrical phantoms, and one clinical patient geometry to examine the capability of this platform to generate conformal plans as well as assess its computational scaling and efficiency, respectively. Results: The authors obtain a reduction of at least 50% in total histories transported in their investigation compared to a theoretical unweighted beamlet calculation and subsequent fluence optimization method, and observe a roughly fixed optimization time overhead consisting of ∼10% of the total computation time in all cases. Finally, the authors demonstrate a negligible increase in memory overhead of ∼7–8 MB to allow for optimization of a clinical patient geometry surrounded by 36 beams using their platform. Conclusions: This study demonstrates a fluence optimization approach, which could significantly improve the development of next generation radiation therapy solutions while incurring minimal additional computational overhead. PMID:27277051
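The momentum ingredient can be illustrated on a toy quadratic objective with heavily noisy gradients, standing in for the stochastic per-history gradient estimates; this is not the authors' dose model, and all values are illustrative:

```python
import numpy as np

def sgd_momentum(noisy_grad, x0, lr=0.02, beta=0.9, steps=1000, seed=0):
    """Gradient descent with momentum on highly stochastic gradient
    estimates (toy stand-in for optimizing fluence during MC
    transport)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    v = np.zeros_like(x)
    for _ in range(steps):
        g = noisy_grad(x, rng)
        v = beta * v + (1.0 - beta) * g   # momentum smooths the noise
        x -= lr * v
    return x

# Toy quadratic with noisy gradients; true minimum at [1, -2].
target = np.array([1.0, -2.0])
noisy = lambda x, rng: 2.0 * (x - target) + rng.normal(0.0, 2.0, size=x.shape)
x_opt = sgd_momentum(noisy, np.zeros(2))
```

The exponential moving average in `v` is what lets descent make progress even when each individual gradient estimate, like one built from very few MC histories, is dominated by noise.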
Atlas-guided prostate intensity modulated radiation therapy (IMRT) planning.
Sheng, Yang; Li, Taoran; Zhang, You; Lee, W Robert; Yin, Fang-Fang; Ge, Yaorong; Wu, Q Jackie
2015-09-21
An atlas-based IMRT planning technique for prostate cancer was developed and evaluated. A multi-dose atlas was built based on the anatomy patterns of the patients, more specifically, the percent distance to the prostate and the concaveness angle formed by the seminal vesicles relative to the anterior-posterior axis. A 70-case dataset was classified using a k-medoids clustering analysis to recognize anatomy pattern variations in the dataset. The best classification, defined by the number of classes or medoids, was determined by the largest value of the average silhouette width. Reference plans from each class formed a multi-dose atlas. The atlas-guided planning (AGP) technique started with matching the new case's anatomy pattern to one of the reference cases in the atlas; then a deformable registration between the atlas and new-case anatomies transferred the dose from the atlas to the new case to guide inverse planning with full automation. Twenty additional clinical cases were re-planned to evaluate the AGP technique. Dosimetric properties between AGP and clinical plans were evaluated. The classification analysis determined that a 5-case atlas would best represent anatomy patterns for the patient cohort. AGP took approximately 1 min on average (corresponding to 70 iterations of optimization) for all cases. When dosimetric parameters were compared, the differences between AGP and clinical plans were less than 3.5%, albeit with some statistically significant differences observed: homogeneity index (p > 0.05), conformity index (p < 0.01), bladder gEUD (p < 0.01), and rectum gEUD (p = 0.02). Atlas-guided treatment planning is feasible and efficient. The atlas-predicted dose can effectively guide the optimizer to achieve plan quality comparable to that of clinical plans.
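The classification step can be sketched with a tiny k-medoids routine plus the average silhouette width used to pick the number of classes. The 2D feature points below are generic stand-ins for the percent-distance and concaveness-angle features, and the initialization scheme is an assumption (the abstract does not specify the PAM variant):

```python
import numpy as np

def k_medoids(X, k, iters=50):
    """Tiny k-medoids with farthest-point initialization (the paper's
    exact clustering variant is not specified in the abstract)."""
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)  # pairwise distances
    medoids = [0]
    while len(medoids) < k:                 # farthest-point seeding
        medoids.append(int(D[:, medoids].min(axis=1).argmax()))
    medoids = np.array(medoids)
    for _ in range(iters):
        labels = D[:, medoids].argmin(axis=1)
        new = medoids.copy()
        for j in range(k):                  # best medoid within each cluster
            members = np.flatnonzero(labels == j)
            if members.size:
                new[j] = members[D[np.ix_(members, members)].sum(axis=1).argmin()]
        if np.array_equal(new, medoids):
            break
        medoids = new
    return medoids, D[:, medoids].argmin(axis=1)

def avg_silhouette(X, labels):
    """Average silhouette width; the largest value over candidate k
    picks the best number of classes."""
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    scores = []
    for i in range(len(X)):
        mask = labels == labels[i]
        mask[i] = False
        a = D[i, mask].mean() if mask.any() else 0.0
        b = min(D[i, labels == c].mean() for c in np.unique(labels) if c != labels[i])
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))

# Two well-separated groups of 2D anatomy features (generic stand-ins).
X = np.vstack([np.zeros((5, 2)), np.full((5, 2), 5.0)])
medoids, labels = k_medoids(X, 2)
score = avg_silhouette(X, labels)
```

Running `avg_silhouette` for several candidate values of k and keeping the maximizer mirrors how the study settled on the 5-class atlas.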
Control and System Theory, Optimization, Inverse and Ill-Posed Problems
1988-09-14
AFOSR-87-0350, 1987-1988. A considerable variety of research investigations within the grant areas (control and system theory, optimization, and inverse and ill-posed problems).
Dynamic gamma knife radiosurgery
NASA Astrophysics Data System (ADS)
Luan, Shuang; Swanson, Nathan; Chen, Zhe; Ma, Lijun
2009-03-01
Gamma knife has been the treatment of choice for various brain tumors and functional disorders. Current gamma knife radiosurgery is planned in a 'ball-packing' approach and delivered in a 'step-and-shoot' manner, i.e. it aims to 'pack' the different sized spherical high-dose volumes (called 'shots') into a tumor volume. We have developed a dynamic scheme for gamma knife radiosurgery based on the concept of 'dose-painting' to take advantage of the new robotic patient positioning system on the latest Gamma Knife C™ and Perfexion™ units. In our scheme, the spherical high dose volume created by the gamma knife unit will be viewed as a 3D spherical 'paintbrush', and treatment planning reduces to finding the best route of this 'paintbrush' to 'paint' a 3D tumor volume. Under our dose-painting concept, gamma knife radiosurgery becomes dynamic, where the patient moves continuously under the robotic positioning system. We have implemented a fully automatic dynamic gamma knife radiosurgery treatment planning system, where the inverse planning problem is solved as a traveling salesman problem combined with constrained least-square optimizations. We have also carried out experimental studies of dynamic gamma knife radiosurgery and showed the following. (1) Dynamic gamma knife radiosurgery is ideally suited for fully automatic inverse planning, where high quality radiosurgery plans can be obtained in minutes of computation. (2) Dynamic radiosurgery plans are more conformal than step-and-shoot plans and can maintain a steep dose gradient (around 13% per mm) between the target tumor volume and the surrounding critical structures. (3) It is possible to prescribe multiple isodose lines with dynamic gamma knife radiosurgery, so that the treatment can cover the periphery of the target volume while escalating the dose for high tumor burden regions. 
(4) With dynamic gamma knife radiosurgery, one can obtain a family of plans representing a tradeoff between the delivery time and the dose distributions, thus giving the clinician one more dimension of flexibility of choosing a plan based on the clinical situations.
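The traveling-salesman step can be approximated with a simple nearest-neighbour heuristic over shot positions; the constrained least-squares weight optimization the authors couple to it is omitted, and the coordinates are illustrative:

```python
import numpy as np

def shot_route(points, start=0):
    """Nearest-neighbour tour over shot positions -- a simple heuristic
    stand-in for the traveling-salesman step of the dynamic planning
    scheme."""
    pts = np.asarray(points, dtype=float)
    unvisited = set(range(len(pts))) - {start}
    route = [start]
    while unvisited:
        cur = pts[route[-1]]
        nxt = min(unvisited, key=lambda j: np.linalg.norm(pts[j] - cur))
        route.append(nxt)
        unvisited.remove(nxt)
    return route

# Four collinear shot positions: nearest-neighbour visits them in order.
route = shot_route([[0, 0, 0], [1, 0, 0], [2, 0, 0], [3, 0, 0]])
```

A short tour keeps the robotic couch motion, and hence delivery time, small while the "paintbrush" sweeps the tumor volume.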
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balik, M; Rybak, M; Strongosky, M
2015-06-15
Purpose: This study investigates whether replanning each fraction of vaginal cuff HDR therapy using a multichannel cylinder (MC) and brachytherapy inverse optimization (BIO) provides dosimetric benefits to organs-at-risk (OAR). The goal was to appropriately cover the target and limit dose to OAR, as well as to evaluate dosimetric changes for each fraction, while doing this in a timely and cost-effective manner. Methods: From an initial selection of 57 patients treated in 3 fractions using a MC and BIO, a subset of n=12 patients was selected based on the criterion that one plan was used for all 3 fractions. A simulation CT was acquired prior to each fraction. CT scans for fractions 2 and 3 were fused to the initial CT. Contours for the bladder and rectum were manually drawn on the CTs for all 3 fractions, and the clinical treatment volume (PTVeval) was defined. Cylinders were reconstructed using an applicator modeling library, influencing time and cost effectiveness. Planning objectives were at least 95% of the prescription dose to 95% (D95%) of the target volume and limiting high dose to OAR. Dose to 2 cm{sup 3} (D2cc) for each OAR was analyzed using a t-test. Results: This study concentrated on comparing the 2 cm{sup 3} of highest dose to OAR (D2cc) for each fraction for the plans that were used to treat all 3 fractions. Based on statistical analysis, using the initial plan for fractions 2 and 3 resulted in a change of approximately 6% in the highest D2cc of the bladder (p=0.03). Conclusion: Performing CT fusion and contouring of each OAR for each fraction allows objective plan evaluation and supports decision making on the necessity of replanning based on improved dose sparing for OAR. Future studies will investigate the effects of replanning on maximum dose (D0.1cc) using the same physician-drawn OAR contours to avoid subjectivity.
Development of a residency program in radiation oncology physics: an inverse planning approach.
Khan, Rao F H; Dunscombe, Peter B
2016-03-08
Over the last two decades, there has been a concerted effort in North America to organize medical physicists' clinical training programs along more structured and formal lines. This effort has been prompted by the Commission on Accreditation of Medical Physics Education Programs (CAMPEP), which has now accredited about 90 residency programs. Initially the accreditation focused on standardized, higher quality clinical physics training; only lately has the development of rounded professionals who can function at a high level in a multidisciplinary environment been recognized as a priority of a radiation oncology physics residency. In this report, we identify and discuss the implementation of, and the essential components of, a radiation oncology physics residency designed to produce knowledgeable and effective clinical physicists for today's safety-conscious and collaborative work environment. Our approach is that of inverse planning, by now familiar to all radiation oncology physicists, in which objectives and constraints are identified prior to the design of the program. Our inverse planning objectives not only include those associated with traditional residencies (i.e., clinical physics knowledge and critical clinical skills), but also encompass those other attributes essential for success in a modern radiation therapy clinic. These attributes include formal training in management skills and leadership, teaching and communication skills, and knowledge of error management techniques and patient safety. The constraints in our optimization exercise are associated with the limited duration of a residency and the training resources available. Without compromising the knowledge and skills needed for clinical tasks, we have successfully applied the model to the University of Calgary's two-year residency program. The program requires 3840 hours of overall commitment from the trainee, of which 7%-10% is spent in obtaining formal training in nontechnical "soft skills".
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghobadi, Kimia; Ghaffari, Hamid R.; Aleman, Dionne M.
2013-09-15
Purpose: The purpose of this work is to advance the two-step approach for Gamma Knife{sup ®} Perfexion™ (PFX) optimization to account for dose homogeneity and overlap between the planning target volume (PTV) and organs-at-risk (OARs). Methods: In the first step, a geometry-based algorithm is used to quickly select isocentre locations while explicitly accounting for PTV-OARs overlaps. In this approach, the PTV is divided into subvolumes based on the PTV-OARs overlaps and the distance of voxels to the overlaps. Only a few isocentres are selected in the overlap volume, and a higher number of isocentres are carefully selected among voxels that are immediately close to the overlap volume. In the second step, a convex optimization is solved to find the optimal combination of collimator sizes and their radiation duration for each isocentre location. Results: This two-step approach is tested on seven clinical cases (comprising 11 targets) for which the authors assess coverage, OARs dose, and homogeneity index and relate these parameters to the overlap fraction for each case. In terms of coverage, the mean V{sub 99} for the gross target volume (GTV) was 99.8% while the V{sub 95} for the PTV averaged at 94.6%, thus satisfying the clinical objectives of 99% for GTV and 95% for PTV, respectively. The mean relative dose to the brainstem was 87.7% of the prescription dose (with maximum 108%), while on average, 11.3% of the PTV overlapped with the brainstem. The mean beam-on time per fraction per dose was 8.6 min with calibration dose rate of 3.5 Gy/min, and the computational time averaged at 205 min. Compared with previous work involving single-fraction radiosurgery, the resulting plans were more homogeneous with average homogeneity index of 1.18 compared to 1.47. Conclusions: PFX treatment plans with homogeneous dose distribution can be achieved by inverse planning using geometric isocentre selection and mathematical modeling and optimization techniques.
The quality of the obtained treatment plans is clinically satisfactory while the homogeneity index is improved compared to conventional PFX plans.
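The second step above is a convex problem: given per-shot dose-rate kernels, find nonnegative beam-on times that best match the prescription. A minimal sketch, using projected gradient descent as a generic stand-in for the authors' solver; the kernel matrix `A` and prescription `d_rx` are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy dose-rate kernels: rows are voxels, columns are candidate
# (isocentre, collimator size) shots -- all values are made up.
A = rng.uniform(0.0, 1.0, size=(30, 8))
d_rx = np.full(30, 4.0)                    # prescription dose per voxel

def nnls_pg(A, b, iters=5000):
    """Projected gradient for min ||A t - b||^2 subject to t >= 0
    (beam-on times cannot be negative)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    t = np.zeros(A.shape[1])
    for _ in range(iters):
        t = np.maximum(t - A.T @ (A @ t - b) / L, 0.0)
    return t

times = nnls_pg(A, d_rx)
dose = A @ times
```

The nonnegativity projection after each gradient step is what keeps the shot durations physically meaningful.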
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boutilier, J; Chan, T; Lee, T
2014-06-15
Purpose: To develop a statistical model that predicts optimization objective function weights from patient geometry for intensity-modulated radiotherapy (IMRT) of prostate cancer. Methods: A previously developed inverse optimization method (IOM) is applied retrospectively to determine optimal weights for 51 treated patients. We use an overlap volume ratio (OVR) of bladder and rectum for different PTV expansions in order to quantify patient geometry in explanatory variables. Using the optimal weights as ground truth, we develop and train a logistic regression (LR) model to predict the rectum weight and thus the bladder weight. Post hoc, we fix the weights of the left femoral head, right femoral head, and an artificial structure that encourages conformity to the population average while normalizing the bladder and rectum weights accordingly. The population average of objective function weights is used for comparison. Results: The OVR at 0.7 cm was found to be the most predictive of the rectum weights. The LR model performance is statistically significant when compared to the population average over a range of clinical metrics including bladder/rectum V53Gy, bladder/rectum V70Gy, and mean voxel dose to the bladder, rectum, CTV, and PTV. On average, the LR model predicted bladder and rectum weights that are both 63% closer to the optimal weights compared to the population average. The treatment plans resulting from the LR weights have, on average, a rectum V70Gy that is 35% closer to the clinical plan and a bladder V70Gy that is 43% closer. Similar results are seen for bladder V54Gy and rectum V54Gy. Conclusion: Statistical modelling from patient anatomy can be used to determine objective function weights in IMRT for prostate cancer. Our method allows treatment planners to begin the personalization process from an informed starting point, which may lead to more consistent clinical plans and reduce overall planning time.
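The logistic-regression step can be illustrated with a minimal sketch. The OVR values and the binarized "high rectum weight" labels below are made up for illustration; the study fits actual IOM-derived weights, not a toy threshold.

```python
import numpy as np

# Hypothetical training data: overlap volume ratio (OVR) at a 0.7 cm PTV
# expansion vs. a binarized "high rectum weight" label (invented numbers).
ovr    = np.array([0.10, 0.15, 0.22, 0.30, 0.41, 0.55, 0.63, 0.78])
high_w = np.array([0,    0,    0,    0,    1,    1,    1,    1])

def fit_logistic(x, y, lr=0.5, iters=20000):
    """Gradient-descent fit of p(high weight | OVR) = sigmoid(a*x + b)."""
    a, b = 0.0, 0.0
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(a * x + b)))
        a -= lr * np.mean((p - y) * x)   # dL/da for cross-entropy loss
        b -= lr * np.mean(p - y)         # dL/db
    return a, b

a, b = fit_logistic(ovr, high_w)
p = 1.0 / (1.0 + np.exp(-(a * ovr + b)))   # fitted probabilities
```

A positive slope `a` reflects the study's finding that larger bladder/rectum overlap with the PTV implies a larger rectum weight.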
Performance evaluation of the inverse dynamics method for optimal spacecraft reorientation
NASA Astrophysics Data System (ADS)
Ventura, Jacopo; Romano, Marcello; Walter, Ulrich
2015-05-01
This paper investigates the application of the inverse dynamics in the virtual domain method to Euler angles, quaternions, and modified Rodrigues parameters for rapid optimal attitude trajectory generation for spacecraft reorientation maneuvers. The impact of the virtual domain and attitude representation is numerically investigated for both minimum time and minimum energy problems. Owing to the nature of the inverse dynamics method, it yields sub-optimal solutions for minimum time problems. Furthermore, the virtual domain improves the optimality of the solution, but at the cost of more computational time. The attitude representation also affects solution quality and computational speed. For minimum energy problems, the optimal solution can be obtained without the virtual domain with any considered attitude representation.
NASA Astrophysics Data System (ADS)
Cai, Xiushan; Meng, Lingxin; Zhang, Wei; Liu, Leipo
2018-03-01
We establish robustness of the predictor feedback control law to perturbations appearing at the system input for affine nonlinear systems with time-varying input delay and additive disturbances. Furthermore, it is shown that it is inverse optimal with respect to a differential game problem. All of the stability and inverse optimality proofs are based on the infinite-dimensional backstepping transformation and an appropriate Lyapunov functional. A single-link manipulator subject to input delays and disturbances is given to illustrate the validity of the proposed method.
Identifying Atmospheric Pollutant Sources Using Artificial Neural Networks
NASA Astrophysics Data System (ADS)
Paes, F. F.; Campos, H. F.; Luz, E. P.; Carvalho, A. R.
2008-05-01
The estimation of the area source pollutant strength is a relevant issue for the atmospheric environment. This characterizes an inverse problem in atmospheric pollution dispersion. In the inverse analysis, an area source domain is considered, where the strength of such area source term is assumed unknown. The inverse problem is solved by using a supervised artificial neural network: a multi-layer perceptron. The connection weights of the neural network are computed by the delta-rule learning process. The neural network inversion is compared with results from standard inverse analysis (regularized inverse solution). In the regularization method, the inverse problem is formulated as a non-linear optimization approach, whose objective function is given by the square difference between the measured pollutant concentration and the mathematical model, associated with a regularization operator. In our numerical experiments, the forward problem is addressed by a source-receptor scheme, where a regressive Lagrangian model is applied to compute the transition matrix. The second order maximum entropy regularization is used, and the regularization parameter is calculated by the L-curve technique. The objective function is minimized employing a deterministic scheme (a quasi-Newton algorithm) [1] and a stochastic technique (PSO: particle swarm optimization) [2]. The inverse problem methodology is tested with synthetic observational data from six measurement points in the physical domain. The best inverse solutions were obtained with neural networks. References: [1] D. R. Roberti, D. Anfossi, H. F. Campos Velho, G. A. Degrazia (2005): Estimating Emission Rate and Pollutant Source Location, Ciencia e Natura, p. 131-134. [2] E.F.P. da Luz, H.F. de Campos Velho, J.C. Becceneri, D.R. Roberti (2007): Estimating Atmospheric Area Source Strength Through Particle Swarm Optimization. Inverse Problems, Design and Optimization Symposium IPDO-2007, April 16-18, Miami (FL), USA, vol 1, p.
354-359.
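The source-receptor scheme and delta-rule training can be sketched as follows. The transition matrix `M`, the network size, and the noise-free setting are all illustrative assumptions; the paper's matrix comes from a Lagrangian dispersion model.

```python
import numpy as np

# Toy source-receptor transition matrix: 6 receptors x 4 area-source cells
# (all numbers invented; the paper builds this from a Lagrangian model).
M = np.array([[0.9, 0.1, 0.1, 0.1],
              [0.1, 0.9, 0.1, 0.1],
              [0.1, 0.1, 0.9, 0.1],
              [0.1, 0.1, 0.1, 0.9],
              [0.3, 0.3, 0.3, 0.3],
              [0.5, 0.2, 0.2, 0.5]])

# Synthetic training pairs: random source strengths -> receptor concentrations
rng = np.random.default_rng(2)
S = rng.uniform(0.0, 2.0, size=(200, 4))
C = S @ M.T

# Single-layer network trained with the (batch) delta rule: W learns to map
# measured concentrations back to the four unknown source strengths.
W = np.zeros((4, 6))
lr = 1.0 / np.linalg.norm(C.T @ C / len(C), 2)   # stable learning rate
for _ in range(5000):
    err = C @ W.T - S                  # network output minus desired output
    W -= lr * err.T @ C / len(C)       # delta rule: step against the error gradient

s_true = np.array([1.0, 0.5, 1.5, 0.2])
s_est = W @ (M @ s_true)               # invert a noise-free "measurement"
```

Once trained, the network inverts new measurements with a single matrix-vector product, which is the practical appeal over iterative regularized inversion.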
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, H; Dong, P; Xing, L
Purpose: Traditional radiotherapy inverse planning relies on weighting factors to phenomenologically balance the conflicting criteria for different structures. The resulting manual trial-and-error determination of the weights has long been recognized as the most time-consuming part of treatment planning. The purpose of this work is to develop an inverse planning framework that parameterizes the inter-structural dosimetric tradeoff with physically more meaningful quantities to simplify the search for a clinically sensible plan. Methods: A permissible dosimetric uncertainty is introduced for each of the structures to balance their conflicting dosimetric requirements. The inverse planning is then formulated as a convex feasibility problem, which aims to generate plans with acceptable dosimetric uncertainties. A sequential procedure (SP) is derived to decompose the model into three submodels to constrain the uncertainty in the planning target volume (PTV), the critical structures, and all other structures to spare, sequentially. The proposed technique is applied to plan a liver case and a head-and-neck case and compared with a conventional approach. Results: Our results show that the strategy is able to generate clinically sensible plans with little trial-and-error. In the case of liver IMRT, the fractional volumes of liver and heart above 20 Gy are found to be 22% and 10%, respectively, which are 15.1% and 33.3% lower than those of the counterpart conventional plan while maintaining the same PTV coverage. The planning of the head and neck IMRT shows the same level of success, with the DVHs for all organs at risk and the PTV very competitive to a counterpart plan. Conclusion: A new inverse planning framework has been established.
With physically more meaningful modeling of the inter-structural tradeoff, the technique enables us to substantially reduce the need for trial-and-error adjustment of the model parameters and opens new opportunities for incorporating prior knowledge to facilitate the treatment planning process.
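The convex-feasibility formulation can be illustrated with a classic feasibility-seeking method, cyclic projections onto halfspaces (POCS). This is a generic stand-in for the paper's sequential procedure; the dose-influence matrix and the per-voxel uncertainty band of ±2 are invented, and the instance is built to be feasible by construction.

```python
import numpy as np

rng = np.random.default_rng(3)
D = rng.uniform(0.0, 1.0, size=(12, 6))     # toy dose-influence matrix

# Build a feasible instance: pick a reference plan and allow each voxel's
# dose a permissible uncertainty band of +/- 2 around it.
w_ref = rng.uniform(5.0, 15.0, size=6)
d_ref = D @ w_ref
lo, hi = d_ref - 2.0, d_ref + 2.0

w = np.zeros(6)
for _ in range(3000):                        # cyclic projections onto halfspaces
    for i in range(len(D)):
        a, d = D[i], D[i] @ w
        if d > hi[i]:                        # too hot: project onto a.w <= hi
            w -= (d - hi[i]) / (a @ a) * a
        elif d < lo[i]:                      # too cold: project onto a.w >= lo
            w += (lo[i] - d) / (a @ a) * a
    w = np.maximum(w, 0.0)                   # project onto nonnegative fluence

dose = D @ w
```

Each projection is closed-form, which is why feasibility-seeking avoids the weight tuning of a weighted-sum objective.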
Mavroidis, Panayiotis; Katsilieri, Zaira; Kefala, Vasiliki; Milickovic, Natasa; Papanikolaou, Nikos; Karabis, Andreas; Zamboglou, Nikolaos; Baltas, Dimos
2010-09-01
One of the issues that a planner often faces in HDR brachytherapy is the selective existence of high dose volumes around a few dominating dwell positions. If there is no information available about its necessity (e.g. the location of a GTV), then it is reasonable to investigate whether this can be avoided. This effect can be eliminated by limiting the free modulation of the dwell times. HIPO, an inverse treatment plan optimization algorithm, offers this option. In treatment plan optimization there are various methods that try to regularize the variation of dose non-uniformity using purely dosimetric measures. However, although these methods can help in finding a good dose distribution, they do not provide any information regarding the expected treatment outcome as described by radiobiology-based indices. The quality of 12 clinical HDR brachytherapy implants for prostate utilizing HIPO and modulation restriction (MR) has been compared to alternative plans with HIPO and free modulation (without MR). All common dose-volume indices for the prostate and the organs at risk have been considered together with radiobiological measures. The clinical effectiveness of the different dose distributions was investigated by calculating the response probabilities of the tumors and organs-at-risk (OARs) involved in these prostate cancer cases. The radiobiological models used are the Poisson and the relative seriality models. Furthermore, the complication-free tumor control probability P+ and the biologically effective uniform dose were used for treatment plan evaluation and comparison. Our results demonstrate that HIPO with a modulation restriction value of 0.1-0.2 delivers high quality plans which are practically equivalent to those achieved with free modulation regarding the clinically used dosimetric indices. In the comparison, many of the dosimetric and radiobiological indices showed significantly different results.
The modulation restricted clinical plans demonstrated a lower total dwell time by a mean of 1.4%, which proved to be statistically significant (p = 0.002). The HIPO with MR treatment plans produced a higher P+ by 0.5%, which stemmed from a better sparing of the OARs by 1.0%. Both the dosimetric and radiobiological comparisons show that the modulation restricted optimization gives on average similar results to the optimization without modulation restriction in the examined clinical cases. Concluding, based on our results, it appears that the applied dwell time regularization technique is expected to introduce a minor improvement in the effectiveness of the optimized HDR dose distributions.
Adaptive eigenspace method for inverse scattering problems in the frequency domain
NASA Astrophysics Data System (ADS)
Grote, Marcus J.; Kray, Marie; Nahum, Uri
2017-02-01
A nonlinear optimization method is proposed for the solution of inverse scattering problems in the frequency domain, when the scattered field is governed by the Helmholtz equation. The time-harmonic inverse medium problem is formulated as a PDE-constrained optimization problem and solved by an inexact truncated Newton-type iteration. Instead of a grid-based discrete representation, the unknown wave speed is projected to a particular finite-dimensional basis of eigenfunctions, which is iteratively adapted during the optimization. Truncating the adaptive eigenspace (AE) basis at a (small and slowly increasing) finite number of eigenfunctions effectively introduces regularization into the inversion and thus avoids the need for standard Tikhonov-type regularization. Both analytical and numerical evidence underpins the accuracy of the AE representation. Numerical experiments demonstrate the efficiency of the resulting adaptive eigenspace inversion method and its robustness to missing or noisy data.
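The regularizing effect of a truncated eigenfunction basis can be shown in a simplified 1-D setting. This sketch replaces the Helmholtz forward problem with direct noisy observation of the profile, and uses fixed cosines (Laplacian eigenfunctions) rather than the paper's adaptively updated basis; the profile, noise level, and basis size are invented.

```python
import numpy as np

# 1-D sketch of the eigenspace idea: represent the unknown profile by a few
# smooth eigenfunctions instead of one value per grid point.
n, k = 200, 10
x = np.linspace(0.0, 1.0, n)
basis = np.stack([np.cos(j * np.pi * x) for j in range(k)], axis=1)

rng = np.random.default_rng(4)
true_profile = 1.5 + 0.5 * np.tanh(10.0 * (x - 0.5))   # wave-speed-like profile
data = true_profile + 0.2 * rng.standard_normal(n)      # noisy observations

# Least-squares fit of k coefficients: truncation acts as regularization,
# since grid-scale noise cannot be represented in the smooth basis.
coef, *_ = np.linalg.lstsq(basis, data, rcond=None)
recon = basis @ coef

err_basis = np.sqrt(np.mean((recon - true_profile) ** 2))
err_raw = np.sqrt(np.mean((data - true_profile) ** 2))
```

With only k unknowns instead of n, the fit suppresses the noise without an explicit Tikhonov penalty term.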
Predicting objective function weights from patient anatomy in prostate IMRT treatment planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Taewoo, E-mail: taewoo.lee@utoronto.ca; Hammad, Muhannad; Chan, Timothy C. Y.
2013-12-15
Purpose: Intensity-modulated radiation therapy (IMRT) treatment planning typically combines multiple criteria into a single objective function by taking a weighted sum. The authors propose a statistical model that predicts objective function weights from patient anatomy for prostate IMRT treatment planning. This study provides a proof of concept for geometry-driven weight determination. Methods: A previously developed inverse optimization method (IOM) was used to generate optimal objective function weights for 24 patients using their historical treatment plans (i.e., dose distributions). These IOM weights were around 1% for each of the femoral heads, while bladder and rectum weights varied greatly between patients. A regression model was developed to predict a patient's rectum weight using the ratio of the overlap volume of the rectum and bladder with the planning target volume at a 1 cm expansion as the independent variable. The femoral head weights were fixed to 1% each and the bladder weight was calculated as one minus the rectum and femoral head weights. The model was validated using leave-one-out cross validation. Objective values and dose distributions generated through inverse planning using the predicted weights were compared to those generated using the original IOM weights, as well as an average of the IOM weights across all patients. Results: The IOM weight vectors were on average six times closer to the predicted weight vectors than to the average weight vector, using l{sub 2} distance. Likewise, the bladder and rectum objective values achieved by the predicted weights were more similar to the objective values achieved by the IOM weights. The difference in objective value performance between the predicted and average weights was statistically significant according to a one-sided sign test.
For all patients, the difference in rectum V54.3 Gy, rectum V70.0 Gy, bladder V54.3 Gy, and bladder V70.0 Gy values between the dose distributions generated by the predicted weights and IOM weights was less than 5 percentage points. Similarly, the difference in femoral head V54.3 Gy values between the two dose distributions was less than 5 percentage points for all but one patient. Conclusions: This study demonstrates a proof of concept that patient anatomy can be used to predict appropriate objective function weights for treatment planning. In the long term, such geometry-driven weights may serve as a starting point for iterative treatment plan design or may provide information about the most clinically relevant region of the Pareto surface to explore.
Inversion method based on stochastic optimization for particle sizing.
Sánchez-Escobar, Juan Jaime; Barbosa-Santillán, Liliana Ibeth; Vargas-Ubera, Javier; Aguilar-Valdés, Félix
2016-08-01
A stochastic inverse method is presented based on a hybrid evolutionary optimization algorithm (HEOA) to retrieve a monomodal particle-size distribution (PSD) from the angular distribution of scattered light. By solving an optimization problem, the HEOA (with the Fraunhofer approximation) retrieves the PSD from an intensity pattern generated by Mie theory. The analyzed light-scattering pattern can be attributed to unimodal normal, gamma, or lognormal distribution of spherical particles covering the interval of modal size parameters 46≤α≤150. The HEOA ensures convergence to the near-optimal solution during the optimization of a real-valued objective function by combining the advantages of a multimember evolution strategy and locally weighted linear regression. The numerical results show that our HEOA can be satisfactorily applied to solve the inverse light-scattering problem.
IPDO-2007: Inverse Problems, Design and Optimization Symposium
2007-08-01
Kanevce, G. H., Kanevce, Lj. P., and Mitrevski, V. B., International Symposium on Inverse Problems, Design and Optimization (IPDO-2007). Contributions include papers by Gligor Kanevce, Ljubica Kanevce, Vangelce Mitrevski, Igor Andreevski, and George Dulikravich.
Investigation of effective decision criteria for multiobjective optimization in IMRT.
Holdsworth, Clay; Stewart, Robert D; Kim, Minsun; Liao, Jay; Phillips, Mark H
2011-06-01
To investigate how using different sets of decision criteria impacts the quality of intensity modulated radiation therapy (IMRT) plans obtained by multiobjective optimization. A multiobjective optimization evolutionary algorithm (MOEA) was used to produce sets of IMRT plans. The MOEA consisted of two interacting algorithms: (i) a deterministic inverse planning optimization of beamlet intensities that minimizes a weighted sum of quadratic penalty objectives to generate IMRT plans and (ii) an evolutionary algorithm that selects the superior IMRT plans using decision criteria and uses those plans to determine the new weights and penalty objectives of each new plan. Plans resulting from the deterministic algorithm were evaluated by the evolutionary algorithm using a set of decision criteria for both targets and organs at risk (OARs). Decision criteria used included variation in the target dose distribution, mean dose, maximum dose, generalized equivalent uniform dose (gEUD), an equivalent uniform dose EUD(alpha,beta) derived from the linear-quadratic survival model, and points on dose volume histograms (DVHs). In order to quantitatively compare results from trials using different decision criteria, a neutral set of comparison metrics was used. For each set of decision criteria investigated, IMRT plans were calculated for four different cases: two simple prostate cases, one complex prostate case, and one complex head and neck case. When smaller numbers of decision criteria, more descriptive decision criteria, or less anti-correlated decision criteria were used to characterize plan quality during multiobjective optimization, dose to OARs and target dose variation were reduced in the final population of plans. Mean OAR dose and gEUD (a = 4) decision criteria were comparable. Using maximum dose decision criteria for OARs near targets resulted in inferior populations that focused solely on low target variance at the expense of high OAR dose.
Target dose range, (D(max) - D(min)), decision criteria were found to be most effective for keeping targets uniform. Using target gEUD decision criteria resulted in much lower OAR doses but much higher target dose variation. EUD(alpha,beta)-based decision criteria focused on a region of plan space that was a compromise between target and OAR objectives. None of these target decision criteria dominated plans using other criteria; each focused on approaching a different area of the Pareto front. The choice of decision criteria implemented in the MOEA had a significant impact on the region explored and the rate of convergence toward the Pareto front. When more decision criteria, anti-correlated decision criteria, or decision criteria with insufficient information were implemented, inferior populations resulted. When more informative decision criteria were used, such as gEUD, EUD(alpha,beta), target dose range, and mean dose, MOEA optimizations focused on approaching different regions of the Pareto front, but did not dominate each other. Using simple OAR decision criteria and target EUD(alpha,beta) decision criteria demonstrated the potential to generate IMRT plans that significantly reduce dose to OARs while achieving the same or better tumor control when clinical requirements on target dose variance can be met or relaxed.
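The gEUD decision criterion used above has a compact closed form, gEUD = (mean(d_i^a))^(1/a). A minimal sketch with invented voxel doses:

```python
import numpy as np

def geud(dose, a):
    """Generalized equivalent uniform dose: (mean(d_i**a))**(1/a).
    a = 1 gives the mean dose; a -> infinity approaches the maximum dose,
    so larger a penalizes hot spots more (serial-organ-like behavior)."""
    dose = np.asarray(dose, dtype=float)
    return float(np.mean(dose ** a) ** (1.0 / a))

oar_dose = np.array([5.0, 12.0, 20.0, 33.0, 41.0])   # made-up voxel doses (Gy)
mean_like = geud(oar_dose, 1)      # identical to the mean dose
serial_like = geud(oar_dose, 4)    # the a = 4 criterion mentioned in the study
```

By the power-mean inequality the a = 4 value always lies between the mean and the maximum dose, which is why it summarizes a DVH more informatively than either extreme alone.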
NASA Astrophysics Data System (ADS)
Wells, Kelley C.; Millet, Dylan B.; Bousserez, Nicolas; Henze, Daven K.; Griffis, Timothy J.; Chaliyakunnel, Sreelekha; Dlugokencky, Edward J.; Saikawa, Eri; Xiang, Gao; Prinn, Ronald G.; O'Doherty, Simon; Young, Dickon; Weiss, Ray F.; Dutton, Geoff S.; Elkins, James W.; Krummel, Paul B.; Langenfelds, Ray; Steele, L. Paul
2018-01-01
We present top-down constraints on global monthly N2O emissions for 2011 from a multi-inversion approach and an ensemble of surface observations. The inversions employ the GEOS-Chem adjoint and an array of aggregation strategies to test how well current observations can constrain the spatial distribution of global N2O emissions. The strategies include (1) a standard 4D-Var inversion at native model resolution (4° × 5°), (2) an inversion for six continental and three ocean regions, and (3) a fast 4D-Var inversion based on a novel dimension reduction technique employing randomized singular value decomposition (SVD). The optimized global flux ranges from 15.9 Tg N yr-1 (SVD-based inversion) to 17.5-17.7 Tg N yr-1 (continental-scale, standard 4D-Var inversions), with the former better capturing the extratropical N2O background measured during the HIAPER Pole-to-Pole Observations (HIPPO) airborne campaigns. We find that the tropics provide a greater contribution to the global N2O flux than is predicted by the prior bottom-up inventories, likely due to underestimated agricultural and oceanic emissions. We infer an overestimate of natural soil emissions in the extratropics and find that predicted emissions are seasonally biased in northern midlatitudes. Here, optimized fluxes exhibit a springtime peak consistent with the timing of spring fertilizer and manure application, soil thawing, and elevated soil moisture. Finally, the inversions reveal a major emission underestimate in the US Corn Belt in the bottom-up inventory used here. We extensively test the impact of initial conditions on the analysis and recommend formally optimizing the initial N2O distribution to avoid biasing the inferred fluxes. 
We find that the SVD-based approach provides a powerful framework for deriving emission information from N2O observations: by defining the optimal resolution of the solution based on the information content of the inversion, it provides spatial information that is lost when aggregating to political or geographic regions, while also providing more temporal information than a standard 4D-Var inversion.
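The randomized-SVD dimension reduction at the heart of the fast 4D-Var variant can be sketched in a few lines. The following toy is illustrative only (matrix sizes, the synthetic spectrum, and the Halko-style range-finder are assumptions for demonstration, not the GEOS-Chem adjoint implementation):

```python
import numpy as np

def randomized_svd(A, k, n_oversample=10, seed=0):
    """Approximate top-k SVD of A via random range sampling (Halko-style)."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + n_oversample))   # Gaussian test matrix
    Q, _ = np.linalg.qr(A @ Omega)                       # orthonormal basis for range(A)
    B = Q.T @ A                                          # small (k+p) x n problem
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ Ub[:, :k], s[:k], Vt[:k]

# Toy "sensitivity matrix" with a rapidly decaying spectrum, mimicking an
# inversion whose information content lives in a few leading modes.
rng = np.random.default_rng(1)
U0, _ = np.linalg.qr(rng.standard_normal((200, 150)))
V0, _ = np.linalg.qr(rng.standard_normal((150, 150)))
s0 = 10.0 ** -np.arange(150, dtype=float)
A = U0 @ np.diag(s0) @ V0.T

U, s, Vt = randomized_svd(A, k=5)
rel_err = np.linalg.norm(A - U @ np.diag(s) @ Vt) / np.linalg.norm(A)
print("rank-5 relative error:", rel_err)
```

The point of the technique is visible here: when the information content is concentrated in a few modes, a low-rank basis found from random sampling captures the operator almost exactly at a fraction of the cost of a full decomposition.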
NASA Astrophysics Data System (ADS)
Eimori, Takahisa; Anami, Kenji; Yoshimatsu, Norifumi; Hasebe, Tetsuya; Murakami, Kazuaki
2014-01-01
A comprehensive design optimization methodology using intuitive nondimensional parameters of inversion-level and saturation-level is proposed, especially for ultralow-power, low-voltage, and high-performance analog circuits with mixed strong, moderate, and weak inversion metal-oxide-semiconductor transistor (MOST) operations. This methodology is based on the synthesized charge-based MOST model composed of Enz-Krummenacher-Vittoz (EKV) basic concepts and advanced-compact-model (ACM) physics-based equations. The key concept of this methodology is that all circuit and system characteristics are described as some multivariate functions of inversion-level parameters, where the inversion level is used as an independent variable representative of each MOST. The analog circuit design starts from the first step of inversion-level design using universal characteristics expressed by circuit currents and inversion-level parameters without process-dependent parameters, followed by the second step of foundry-process-dependent design and the last step of verification using saturation-level criteria. This methodology also paves the way to an intuitive and comprehensive design approach for many kinds of analog circuit specifications by optimization using inversion-level log-scale diagrams and saturation-level criteria. In this paper, we introduce an example of our design methodology for a two-stage Miller amplifier.
Dosimetric advantages of IMPT over IMRT for laser-accelerated proton beams
NASA Astrophysics Data System (ADS)
Luo, W.; Li, J.; Fourkal, E.; Fan, J.; Xu, X.; Chen, Z.; Jin, L.; Price, R.; Ma, C.-M.
2008-12-01
As a clinical application of an exciting scientific breakthrough, a compact and cost-efficient proton therapy unit using high-power laser acceleration is being developed at Fox Chase Cancer Center. The significance of this application depends on whether or not it can yield dosimetric superiority over intensity-modulated radiation therapy (IMRT). The goal of this study is to show how laser-accelerated proton beams with broad energy spreads can be optimally used for proton therapy including intensity-modulated proton therapy (IMPT) and achieve dosimetric superiority over IMRT for prostate cancer. Desired energies and spreads with a varying δE/E were selected with the particle selection device and used to generate spread-out Bragg peaks (SOBPs). Proton plans were generated on an in-house Monte Carlo-based inverse-planning system. Fifteen prostate IMRT plans previously used for patient treatment have been included for comparison. Identical dose prescriptions, beam arrangement and consistent dose constraints were used for IMRT and IMPT plans to show the dosimetric differences that were caused only by the different physical characteristics of proton and photon beams. Different optimization constraints and beam arrangements were also used to find optimal IMPT. The results show that conventional proton therapy (CPT) plans without intensity modulation were not superior to IMRT, but IMPT can generate better proton plans if appropriate beam setup and optimization are used. Compared to IMRT, IMPT can reduce the target dose heterogeneity ((D5-D95)/D95) by up to 56%. The volume receiving 65 Gy and higher (V65) for the bladder and the rectum can be reduced by up to 45% and 88%, respectively, while the volume receiving 40 Gy and higher (V40) for the bladder and the rectum can be reduced by up to 49% and 68%, respectively. IMPT can also reduce the whole body non-target tissue dose by up to 61%, or a factor of 2.5.
This study has shown that the laser accelerator under development has a potential to generate high-quality proton beams for cancer treatment. Significant improvement in target dose uniformity and normal tissue sparing as well as in reduction of whole body dose can be achieved by IMPT with appropriate optimization and beam setup.
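The spread-out Bragg peak (SOBP) construction mentioned above amounts to superposing range-shifted pristine peaks with optimized weights. A minimal sketch, using toy Gaussian depth-dose shapes rather than measured or Monte Carlo proton data (shapes, ranges and widths are assumptions for illustration):

```python
import numpy as np

def pristine_peak(x, r, sigma=0.5):
    """Toy pristine Bragg curve: low entrance plateau proximal to range r,
    plus a Gaussian peak at r (illustrative shape, not measured data)."""
    return 0.2 * (x < r) + np.exp(-0.5 * ((x - r) / sigma) ** 2)

# Flatten the dose over 6-10 cm by weighting 9 range-shifted pristine peaks:
# solve for the weights that deliver unit dose at each peak depth.
ranges = np.linspace(6.0, 10.0, 9)
D = np.array([[pristine_peak(xt, r) for r in ranges] for xt in ranges])
w = np.linalg.solve(D, np.ones(len(ranges)))

x = np.linspace(0.0, 12.0, 241)
sobp = sum(wi * pristine_peak(x, r) for wi, r in zip(w, ranges))
print("dose at 8 cm:", sobp[np.argmin(np.abs(x - 8.0))])
print("dose at 11.5 cm:", sobp[np.argmin(np.abs(x - 11.5))])
```

The same idea underlies the actual particle-selection device: each selected energy contributes one pristine peak, and the weights shape the plateau while the dose beyond the deepest range falls off sharply.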
Planning hybrid intensity modulated radiation therapy for whole-breast irradiation.
Farace, Paolo; Zucca, Sergio; Solla, Ignazio; Fadda, Giuseppina; Durzu, Silvia; Porru, Sergio; Meleddu, Gianfranco; Deidda, Maria Assunta; Possanzini, Marco; Orrù, Sivia; Lay, Giancarlo
2012-09-01
To test tangential and nontangential hybrid intensity modulated radiation therapy (IMRT) for whole-breast irradiation. Seventy-eight (36 right-, 42 left-) breast patients were randomly selected. Hybrid IMRT was performed by direct aperture optimization. A semiautomated method for planning hybrid IMRT was implemented using Pinnacle scripts. A plan optimization volume (POV), defined as the portion of the planning target volume covered by the open beams, was used as the target objective during inverse planning. Treatment goals were to prescribe a minimum dose of 47.5 Gy to greater than 90% of the POV and to minimize the POV and/or normal tissue receiving a dose greater than 107%. When treatment goals were not achieved by using a 4-field technique (2 conventional open plus 2 IMRT tangents), a 6-field technique was applied, adding 2 nontangential (anterior-oblique) IMRT beams. Using scripts, manual procedures were minimized (choice of optimal beam angle, setting monitor units for open tangentials, and POV definition). Treatment goals were achieved by using the 4-field technique in 61 of 78 (78%) patients. The 6-field technique was applied in the remaining 17 of 78 (22%) patients, allowing for significantly better achievement of goals, at the expense of an increase of low-dose (∼5 Gy) distribution in the contralateral tissue, heart, and lungs but with no significant increase of higher doses (∼20 Gy) in heart and lungs. The mean monitor unit contribution to IMRT beams was significantly greater (18.7% vs 9.9%) in the group of patients who required the 6-field procedure. Because hybrid IMRT can be performed semiautomatically, it can be planned for a large number of patients with little impact on human or departmental resources, promoting it as the standard practice for whole-breast irradiation. Copyright © 2012 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chin, E; Hoppe, R; Million, L
2015-06-15
Purpose: Integration of coordinated robotic table motion with inversely-planned arc delivery has the potential to resolve table-top delivery limitations of large-field treatments such as Total Body Irradiation (TBI), Total Lymphoid Irradiation (TLI), and Cranial-Spinal Irradiation (CSI). We formulate the foundation for Trajectory Modulated Arc Therapy (TMAT), and using Varian Developer Mode capabilities, experimentally investigate its practical implementation for such techniques. Methods: A MATLAB algorithm was developed for inverse planning optimization of the table motion, MLC positions, and gantry motion under extended-SSD geometry. To maximize the effective field size, delivery trajectories for TMAT TBI were formed with the table rotated at 270° IEC and dropped vertically to 152.5 cm SSD. Preliminary testing of algorithm parameters was done through retrospective planning analysis. Robotic delivery was programmed using custom XML scripting on the TrueBeam Developer Mode platform. Final dose was calculated using the Eclipse AAA algorithm. Initial verification of delivery accuracy was measured using OSLDs on a solid water phantom of varying thickness. Results: A comparison of DVH curves demonstrated that dynamic couch motion irradiation was sufficiently approximated by static control points spaced in intervals of less than 2 cm. Optimized MLC motion decreased the average lung dose to 68.5% of the prescription dose. The programmed irradiation integrating coordinated table motion was deliverable on a TrueBeam STx linac in 6.7 min. With the couch translating under an open 10 cm × 20 cm field angled at 10°, OSLD measurements along the midline of a solid water phantom at depths of 3, 5, and 9 cm were within 3% of the TPS AAA algorithm with an average deviation of 1.2%. Conclusion: A treatment planning and delivery system for Trajectory Modulated Arc Therapy of extended volumes has been established and experimentally demonstrated for TBI.
Extension to other treatment techniques such as TLI and CSI is readily achievable through the developed platform. Grant Funding by Varian Medical Systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Connell, T; Papaconstadopoulos, P; Alexander, A
2014-08-15
Modulated electron radiation therapy (MERT) offers the potential to improve healthy tissue sparing through increased dose conformity. Challenges remain, however, in accurate beamlet dose calculation, plan optimization, collimation method and delivery accuracy. In this work, we investigate the accuracy and efficiency of an end-to-end MERT plan and automated-delivery workflow for the electron boost portion of a previously treated whole breast irradiation case. Dose calculations were performed using Monte Carlo methods and beam weights were determined using a research-based treatment planning system capable of inverse optimization. The plan was delivered to radiochromic film placed in a water equivalent phantom for verification, using an automated motorized tertiary collimator. The automated delivery, which covered 4 electron energies, 196 subfields and 6183 total MU, was completed in 25.8 minutes, including 6.2 minutes of beam-on time, with the remainder of the delivery time spent on collimator leaf motion and the automated interfacing with the accelerator in service mode. The delivery time could be reduced by 5.3 minutes with minor electron collimator modifications, and the beam-on time could be reduced by an estimated factor of 2–3 through redesign of the scattering foils. Comparison of the planned and delivered film dose gave 3%/3 mm gamma pass rates of 62.1, 99.8, 97.8, 98.3, and 98.7 percent for the 9, 12, 16, 20 MeV, and combined energy deliveries, respectively. Good results were also seen in the delivery verification performed with a MapCHECK 2 device. The results showed that accurate and efficient MERT delivery is possible with current technologies.
Comparison of Optimal Design Methods in Inverse Problems
Banks, H. T.; Holm, Kathleen; Kappel, Franz
2011-01-01
Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher Information Matrix (FIM). A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model [13], the standard harmonic oscillator model [13] and a popular glucose regulation model [16, 19, 29]. PMID:21857762
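The FIM-based design criteria discussed above can be illustrated with a simplified greedy D-optimal sketch for the Verhulst-Pearl logistic model. The finite-difference sensitivities, greedy selection, and parameter values below are illustrative assumptions, not the authors' Prohorov-metric framework:

```python
import numpy as np

def logistic(t, K, r, x0=1.0):
    """Verhulst-Pearl logistic solution x(t) = K / (1 + (K/x0 - 1) e^{-rt})."""
    return K / (1.0 + (K / x0 - 1.0) * np.exp(-r * t))

def sensitivity_rows(t, K, r, h=1e-6):
    """Finite-difference sensitivities of x(t) w.r.t. (K, r); one row per time."""
    dK = (logistic(t, K + h, r) - logistic(t, K - h, r)) / (2 * h)
    dr = (logistic(t, K, r + h) - logistic(t, K, r - h)) / (2 * h)
    return np.column_stack([dK, dr])

def d_optimal_greedy(candidates, n_pts, K=10.0, r=0.8):
    """Greedily add sampling times maximizing det(FIM) for unit-variance noise.

    A tiny ridge keeps the criterion informative while the FIM is still singular."""
    chosen = []
    for _ in range(n_pts):
        best_t, best_det = None, -np.inf
        for t in candidates:
            if t in chosen:
                continue
            S = sensitivity_rows(np.array(chosen + [t]), K, r)
            det = np.linalg.det(S.T @ S + 1e-9 * np.eye(2))
            if det > best_det:
                best_t, best_det = t, det
        chosen.append(best_t)
    return sorted(chosen)

times = d_optimal_greedy(np.linspace(0.1, 15.0, 60), n_pts=4)
print("D-optimal sampling times:", times)
```

The sketch shows the qualitative behavior one expects from D-optimality: sampling times concentrate where the model output is sensitive to the parameters (mid-growth for r, late times for K), not near t = 0 where the solution barely moves.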
Optimized emission in nanorod arrays through quasi-aperiodic inverse design.
Anderson, P Duke; Povinelli, Michelle L
2015-06-01
We investigate a new class of quasi-aperiodic nanorod structures for the enhancement of incoherent light emission. We identify one optimized structure using an inverse design algorithm and the finite-difference time-domain method. We carry out emission calculations on both the optimized structure as well as a simple periodic array. The optimized structure achieves nearly perfect light extraction while maintaining a high spontaneous emission rate. Overall, the optimized structure can achieve a 20%-42% increase in external quantum efficiency relative to a simple periodic design, depending on material quality.
NASA Astrophysics Data System (ADS)
Alexander, Andrew William
Within the field of medical physics, Monte Carlo radiation transport simulations are considered to be the most accurate method for the determination of dose distributions in patients. The McGill Monte Carlo treatment planning system (MMCTP) provides a flexible software environment to integrate Monte Carlo simulations with current and new treatment modalities. Energy- and intensity-modulated electron radiotherapy (MERT) is a promising developing modality with the fundamental capability to enhance the dosimetry of superficial targets. An objective of this work is to advance the research and development of MERT with the end goal of clinical use. To this end, we present the MMCTP system with an integrated toolkit for MERT planning and delivery of MERT fields. Delivery is achieved using an automated "few leaf electron collimator" (FLEC) and a controller. Aside from the MERT planning toolkit, the MMCTP system required numerous add-ons to perform the complex task of large-scale autonomous Monte Carlo simulations. The first was a DICOM import filter, followed by the implementation of DOSXYZnrc as a dose calculation engine and by logic methods for submitting and updating the status of Monte Carlo simulations. Within this work we validated the MMCTP system with a head and neck Monte Carlo recalculation study performed by a medical dosimetrist. The impact of MMCTP lies in the fact that it allows for systematic and platform-independent large-scale Monte Carlo dose calculations for different treatment sites and treatment modalities. In addition to the MERT planning tools, various optimization algorithms were created external to MMCTP. The algorithms produced MERT treatment plans based on dose volume constraints that employ Monte Carlo pre-generated patient-specific kernels. The Monte Carlo kernels are generated from patient-specific Monte Carlo dose distributions within MMCTP.
The structure of the MERT planning toolkit software and optimization algorithms are demonstrated. We investigated the clinical significance of MERT on spinal irradiation, breast boost irradiation, and a head and neck sarcoma cancer site using several parameters to analyze the treatment plans. Finally, we investigated the idea of mixed beam photon and electron treatment planning. Photon optimization treatment planning tools were included within the MERT planning toolkit for the purpose of mixed beam optimization. In conclusion, this thesis work has resulted in the development of an advanced framework for photon and electron Monte Carlo treatment planning studies and the development of an inverse planning system for photon, electron or mixed beam radiotherapy (MBRT). The justification and validation of this work is found within the results of the planning studies, which have demonstrated dosimetric advantages to using MERT or MBRT in comparison to clinical treatment alternatives.
Toushmalani, Reza
2013-01-01
The purpose of this study was to compare the performance of two methods for gravity inversion of a fault. The first method, particle swarm optimization (PSO), is a heuristic global optimization algorithm based on swarm intelligence, originating from research on the movement behavior of bird flocks and fish schools. The second method, the Levenberg-Marquardt (LM) algorithm, is an approximation to Newton's method that is also used for training artificial neural networks (ANNs). In this paper we first discuss the gravity field of a fault, then describe the PSO and LM algorithms and present their application to solving the inverse problem of a fault. The parameters used for the algorithms are given for the individual tests. The inverse solution reveals that the fault model parameters agree quite well with the known results. Better agreement between the predicted model anomaly and the observed gravity anomaly was found with the PSO method than with the LM method.
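A minimal global-best PSO of the kind described can be sketched on a toy two-parameter gravity-like inversion. The target here is a buried point-mass anomaly, not the fault model of the paper, and all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def anomaly(x, amp, depth):
    """Gravity anomaly of a buried point mass below the origin (arbitrary units)."""
    return amp * depth / (x ** 2 + depth ** 2) ** 1.5

x_obs = np.linspace(-10.0, 10.0, 41)
g_obs = anomaly(x_obs, 50.0, 3.0)            # noise-free synthetic observations

def misfit(p):
    return np.sum((anomaly(x_obs, p[0], p[1]) - g_obs) ** 2)

def pso(f, lo, hi, n_particles=40, n_iter=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal global-best particle swarm optimizer with box constraints."""
    dim = len(lo)
    pos = rng.uniform(lo, hi, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_f = pos.copy(), np.array([f(p) for p in pos])
    g = pbest[np.argmin(pbest_f)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = np.clip(pos + vel, lo, hi)
        fvals = np.array([f(p) for p in pos])
        improved = fvals < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], fvals[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, f(g)

best, best_f = pso(misfit, lo=np.array([1.0, 0.5]), hi=np.array([100.0, 10.0]))
print("recovered (amplitude, depth):", best, "misfit:", best_f)
```

The swarm needs no gradient of the misfit, which is the property that makes PSO attractive for non-linear geophysical inversion; LM, by contrast, would iterate on the Jacobian of the same residual vector.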
NASA Astrophysics Data System (ADS)
Padhi, Amit; Mallick, Subhashis
2014-03-01
Inversion of band- and offset-limited single component (P wave) seismic data does not provide robust estimates of subsurface elastic parameters and density. Multicomponent seismic data can, in principle, circumvent this limitation but add to the complexity of the inversion algorithm because they require simultaneous optimization of multiple objective functions, one for each data component. In seismology, these multiple objectives are typically handled by constructing a single objective given as a weighted sum of the objectives of individual data components, sometimes with additional regularization terms reflecting their interdependence, which is then followed by a single-objective optimization. Multi-objective problems, including multicomponent seismic inversion, are however non-linear. They have non-unique solutions, known as the Pareto-optimal solutions. Therefore, casting such problems as a single-objective optimization provides one out of the entire set of the Pareto-optimal solutions, which, in turn, may be biased by the choice of the weights. To handle multiple objectives, it is thus appropriate to treat the objective as a vector and simultaneously optimize each of its components so that the entire Pareto-optimal set of solutions can be estimated. This paper proposes such a novel multi-objective methodology using a non-dominated sorting genetic algorithm for waveform inversion of multicomponent seismic data. The applicability of the method is demonstrated using synthetic data generated from multilayer models based on a real well log. We document that the proposed method can reliably extract subsurface elastic parameters and density from multicomponent seismic data both when the subsurface is considered isotropic and transversely isotropic with a vertical symmetry axis. We also compute approximate uncertainty values in the derived parameters.
Although we restrict our inversion applications to horizontally stratified models, we outline a practical procedure of extending the method to approximately include local dips for each source-receiver offset pair. Finally, the applicability of the proposed method is not just limited to seismic inversion but it could be used to invert different data types not only requiring multiple objectives but also multiple physics to describe them.
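The non-dominated sorting at the core of such a genetic algorithm reduces to a dominance test over objective vectors. A self-contained sketch of extracting the first Pareto front (toy objective values; minimization convention assumed):

```python
import numpy as np

def pareto_front(F):
    """Indices of non-dominated points for a set of objective vectors (minimization).

    Point i is dominated if some j is no worse in every objective and strictly
    better in at least one -- the core test in non-dominated sorting."""
    F = np.asarray(F)
    n = len(F)
    keep = []
    for i in range(n):
        dominated = any(
            np.all(F[j] <= F[i]) and np.any(F[j] < F[i])
            for j in range(n) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep

# Two objectives in conflict (e.g., misfits of two data components):
pts = np.array([[1.0, 5.0], [2.0, 3.0], [3.0, 4.0], [4.0, 1.0], [2.5, 2.5]])
print(pareto_front(pts))  # → [0, 1, 3, 4]
```

Point 2 at (3, 4) is dominated by (2, 3) and drops out; the surviving points form the trade-off curve between the two misfits, which is exactly the set a weighted-sum single objective would collapse to one point.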
Optimization of light source parameters in the photodynamic therapy of heterogeneous prostate
NASA Astrophysics Data System (ADS)
Li, Jun; Altschuler, Martin D.; Hahn, Stephen M.; Zhu, Timothy C.
2008-08-01
The three-dimensional (3D) heterogeneous distributions of optical properties in a patient prostate can now be measured in vivo. Such data can be used to obtain a more accurate light-fluence kernel. (For specified sources and points, the kernel gives the fluence delivered to a point by a source of unit strength.) In turn, the kernel can be used to solve the inverse problem that determines the source strengths needed to deliver a prescribed photodynamic therapy (PDT) dose (or light-fluence) distribution within the prostate (assuming uniform drug concentration). We have developed and tested computational procedures to use the new heterogeneous data to optimize delivered light-fluence. New problems arise, however, in quickly obtaining an accurate kernel following the insertion of interstitial light sources and data acquisition. (1) The light-fluence kernel must be calculated in 3D and separately for each light source, which increases kernel size. (2) An accurate kernel for light scattering in a heterogeneous medium requires ray tracing and volume partitioning, thus significant calculation time. To address these problems, two different kernels were examined and compared for speed of creation and accuracy of dose. Kernels derived more quickly involve simpler algorithms. Our goal is to achieve optimal dose planning with patient-specific heterogeneous optical data applied through accurate kernels, all within clinical times. The optimization process is restricted to accepting the given (interstitially inserted) sources, and determining the best source strengths with which to obtain a prescribed dose. The Cimmino feasibility algorithm is used for this purpose. The dose distribution and source weights obtained for each kernel are analyzed. In clinical use, optimization will also be performed prior to source insertion to obtain initial source positions, source lengths and source weights, but with the assumption of homogeneous optical properties. 
For this reason, we compare the results from heterogeneous optical data with those obtained from average homogeneous optical properties. The optimized treatment plans are also compared with the reference clinical plan, defined as the plan with sources of equal strength, distributed regularly in space, which delivers a mean value of prescribed fluence at detector locations within the treatment region. The study suggests that comprehensive optimization of source parameters (i.e. strengths, lengths and locations) is feasible, thus allowing acceptable dose coverage in a heterogeneous prostate PDT within the time constraints of the PDT procedure.
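The Cimmino feasibility algorithm used for the source-weight optimization projects simultaneously onto all violated dose bounds and averages the projections. A small sketch on a synthetic, known-feasible dose-window problem (matrix sizes, bounds, and relaxation parameter are illustrative, not clinical data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy dose model: dose at 30 detector points is A @ w for 5 source strengths.
A = rng.uniform(0.1, 1.0, (30, 5))
w_feasible = rng.uniform(0.5, 1.5, 5)
d_ref = A @ w_feasible
lower, upper = 0.8 * d_ref, 1.2 * d_ref        # prescribed dose window (feasible)

def cimmino(A, lower, upper, n_iter=3000, lam=1.8):
    """Cimmino's simultaneous-projection method for dose-window feasibility.

    Each violated bound contributes an orthogonal projection onto its
    half-space; the average of all projections is applied at once."""
    m, n = A.shape
    w = np.full(n, 0.1)
    row_norm2 = np.sum(A ** 2, axis=1)
    for _ in range(n_iter):
        d = A @ w
        resid = np.where(d < lower, lower - d, np.where(d > upper, upper - d, 0.0))
        w = w + lam * (A.T @ (resid / row_norm2)) / m   # averaged projection step
        w = np.maximum(w, 0.0)                          # nonnegative source strengths
    return w

w = cimmino(A, lower, upper)
d = A @ w
print("worst lower violation:", np.max(lower - d), "worst upper violation:", np.max(d - upper))
```

Because every constraint is handled in parallel and each step is a cheap matrix-vector product, the method fits the clinical time constraints the abstract emphasizes.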
NASA Astrophysics Data System (ADS)
Chen, Z.; Chen, J.; Zheng, X.; Jiang, F.; Zhang, S.; Ju, W.; Yuan, W.; Mo, G.
2014-12-01
In this study, we explore the feasibility of optimizing ecosystem photosynthetic and respiratory parameters from the seasonal variation pattern of the net carbon flux. An optimization scheme is proposed to estimate two key parameters (Vcmax and Q10) by exploiting the seasonal variation in the net ecosystem carbon flux retrieved by an atmospheric inversion system. This scheme is implemented to estimate Vcmax and Q10 of the Boreal Ecosystem Productivity Simulator (BEPS) to improve its NEP simulation in the Boreal North America (BNA) region. Simultaneously, in-situ NEE observations at six eddy covariance sites are used to evaluate the NEE simulations. The results show that the performance of the optimized BEPS is superior to that of the BEPS with the default parameter values. These results have implications for using atmospheric CO2 data to optimize ecosystem parameters through atmospheric inversion or data assimilation techniques.
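The parameter estimation described above can be caricatured as minimizing a seasonal NEP misfit over a (Vcmax-like, Q10) pair. The toy model below, a saturating light response minus Q10 respiration under synthetic seasonal forcing, is an assumption for illustration and is not the BEPS model:

```python
import numpy as np

def respiration(T, R10, Q10):
    """Ecosystem respiration with a Q10 temperature response (reference 10 °C)."""
    return R10 * Q10 ** ((T - 10.0) / 10.0)

def nep(T, par, vc_scale, R10, Q10):
    """Toy NEP: Vcmax-scaled, light-saturating GPP minus Q10 respiration."""
    gpp = vc_scale * par / (par + 400.0)
    return gpp - respiration(T, R10, Q10)

# Synthetic monthly forcing: temperature and light over one seasonal cycle
month = np.arange(12)
T = 5.0 + 15.0 * np.sin((month - 3) * np.pi / 6.0)
par = 600.0 + 400.0 * np.sin((month - 3) * np.pi / 6.0)

true = dict(vc_scale=12.0, R10=4.0, Q10=2.0)
obs = nep(T, par, **true)                     # "retrieved" seasonal NEP

# Brute-force seasonal-misfit minimization over (vc_scale, Q10), R10 held fixed
vc_grid = np.linspace(8.0, 16.0, 81)
q_grid = np.linspace(1.2, 3.0, 91)
mis = [[np.sum((nep(T, par, v, 4.0, q) - obs) ** 2) for q in q_grid] for v in vc_grid]
iv, iq = np.unravel_index(np.argmin(mis), (len(vc_grid), len(q_grid)))
print("recovered vc_scale, Q10:", vc_grid[iv], q_grid[iq])
```

The example illustrates why the seasonal pattern carries the information: the photosynthetic parameter and Q10 imprint differently shaped signals on the annual NEP cycle, so the misfit surface has a single well-defined minimum.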
Carl, Michael; Bydder, Graeme M; Du, Jiang
2016-08-01
The long repetition time and inversion time with inversion recovery preparation ultrashort echo time (UTE) often cause prohibitively long scan times. We present an optimized method for long T2 signal suppression in which several k-space spokes are acquired after each inversion preparation. Using Bloch equations, the sequence parameters such as TI and flip angle were optimized to suppress the long T2 water and fat signals and to maximize short T2 contrast. Volunteer imaging was performed on a healthy male volunteer. Inversion recovery preparation was performed using a Silver-Hoult adiabatic inversion pulse together with a three-dimensional (3D) UTE (3D Cones) acquisition. The theoretical signal curves generally agreed with the experimentally measured region of interest curves. The multispoke inversion recovery method showed good muscle and fatty bone marrow suppression, and highlighted short T2 signals such as those from the femoral and tibial cortex. Inversion recovery 3D UTE imaging with multiple spoke acquisitions can be used to effectively suppress long T2 signals and highlight short T2 signals within clinical scan times. Theoretical modeling can be used to determine sequence parameters to optimize long T2 signal suppression and maximize short T2 signals. Experimental results on a volunteer confirmed the theoretical predictions. Magn Reson Med 76:577-582, 2016. © 2015 Wiley Periodicals, Inc.
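The TI optimization follows from the Bloch equations: in the long-TR, perfect-inversion limit, longitudinal magnetization recovers as Mz(TI) = 1 - 2 exp(-TI/T1), so a long-T2 tissue is nulled near TI = T1 ln 2. A sketch with assumed illustrative T1 values, not the multispoke sequence of the paper:

```python
import numpy as np

def mz(TI, T1):
    """Longitudinal magnetization TI ms after a perfect inversion (long-TR limit)."""
    return 1.0 - 2.0 * np.exp(-TI / T1)

# Assumed illustrative T1 values (ms): long-T2 muscle vs short-T2 cortical bone
T1_muscle, T1_bone = 1400.0, 220.0

TI_grid = np.arange(1.0, 3000.0, 1.0)
TI_null = TI_grid[np.argmin(np.abs(mz(TI_grid, T1_muscle)))]
print("numerical nulling TI:", TI_null, "analytic T1*ln2:", T1_muscle * np.log(2))
print("residual bone signal at that TI:", mz(TI_null, T1_bone))
```

At the muscle-nulling TI the short-T1 (and short-T2) tissue has almost fully recovered, which is the contrast mechanism the sequence exploits; the multispoke readout of the paper then amortizes each inversion over several spokes to shorten the scan.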
Inexact trajectory planning and inverse problems in the Hamilton–Pontryagin framework
Burnett, Christopher L.; Holm, Darryl D.; Meier, David M.
2013-01-01
We study a trajectory-planning problem whose solution path evolves by means of a Lie group action and passes near a designated set of target positions at particular times. This is a higher-order variational problem in optimal control, motivated by potential applications in computational anatomy and quantum control. Reduction by symmetry in such problems naturally summons methods from Lie group theory and Riemannian geometry. A geometrically illuminating form of the Euler–Lagrange equations is obtained from a higher-order Hamilton–Pontryagin variational formulation. In this context, the previously known node equations are recovered with a new interpretation as Legendre–Ostrogradsky momenta possessing certain conservation properties. Three example applications are discussed as well as a numerical integration scheme that follows naturally from the Hamilton–Pontryagin principle and preserves the geometric properties of the continuous-time solution. PMID:24353467
NASA Astrophysics Data System (ADS)
Davis, A. D.; Huan, X.; Heimbach, P.; Marzouk, Y.
2017-12-01
Borehole data are essential for calibrating ice sheet models. However, field expeditions for acquiring borehole data are often time-consuming, expensive, and dangerous. It is thus essential to plan the best sampling locations that maximize the value of data while minimizing costs and risks. We present an uncertainty quantification (UQ) workflow based on a rigorous probabilistic framework to achieve these objectives. First, we employ an optimal experimental design (OED) procedure to compute borehole locations that yield the highest expected information gain. We take into account practical considerations of location accessibility (e.g., proximity to research sites, terrain, and ice velocity may affect feasibility of drilling) and robustness (e.g., real-time constraints such as weather may force researchers to drill at sub-optimal locations near those originally planned), by incorporating a penalty reflecting accessibility as well as sensitivity to deviations from the optimal locations. Next, we extract vertical temperature profiles from these boreholes and formulate a Bayesian inverse problem to reconstruct past surface temperatures. Using a model of temperature advection/diffusion, the top boundary condition (corresponding to surface temperatures) is calibrated via efficient Markov chain Monte Carlo (MCMC). The overall procedure can then be iterated to choose new optimal borehole locations for the next expeditions. Through this work, we demonstrate powerful UQ methods for designing experiments, calibrating models, making predictions, and assessing sensitivity--all performed under an uncertain environment. We develop a theoretical framework as well as practical software within an intuitive workflow, and illustrate their usefulness for combining data and models for environmental and climate research.
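For a linear-Gaussian model, the expected information gain of a single observation has a closed form, and an accessibility penalty can simply be subtracted from it before maximizing over candidate locations. A toy sketch of that idea (the sensitivity kernel, noise level, camp location, and penalty weight are all assumptions for illustration):

```python
import numpy as np

# Candidate borehole locations along a transect; the "model" maps 4 unknown
# past-temperature parameters to the measurement at location x.
xs = np.linspace(0.0, 100.0, 51)

def forward_row(x):
    """Assumed sensitivity of a measurement at x to the 4 parameters (toy kernel)."""
    centers = np.array([20.0, 40.0, 60.0, 80.0])
    return np.exp(-0.5 * ((x - centers) / 15.0) ** 2)

prior_cov = np.eye(4)
noise_var = 0.05 ** 2

def expected_info_gain(x):
    """Closed-form linear-Gaussian expected information gain of one observation."""
    g = forward_row(x)[None, :]
    S = g @ prior_cov @ g.T / noise_var + 1.0
    return 0.5 * np.log(S[0, 0])

def penalized_eig(x, camp=0.0, weight=0.01):
    """EIG minus an accessibility penalty growing with distance from camp."""
    return expected_info_gain(x) - weight * abs(x - camp)

best_free = xs[np.argmax([expected_info_gain(x) for x in xs])]
best_pen = xs[np.argmax([penalized_eig(x) for x in xs])]
print("unpenalized choice:", best_free, " penalized choice:", best_pen)
```

The penalty pulls the design toward the accessible end of the transect while still avoiding genuinely uninformative locations, which is the qualitative trade-off the workflow formalizes.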
The treatment of extensive scalp lesions combining electrons with intensity-modulated photons.
Chan, Maria F; Song, Yulin; Burman, Chandra; Chui, Chen S; Schupak, Karen
2006-01-01
This study investigated the feasibility and potential benefits of combining electrons with intensity-modulated photons (IMRT+e) for patients with extensive scalp lesions. A case of a patient with an extensive scalp lesion, in which the target volume covered the entire front half of the scalp, is presented. This approach incorporated the electron dose into the inverse treatment planning optimization. The resulting doses to the planning target volume (PTV) and relevant critical structures were compared. Thermoluminescent dosimeters (TLD), diodes, and GAFCHROMIC EBT films were used to verify the accuracy of the techniques. The IMRT+e plan produced a superior dose distribution to the patient as compared to the IMRT plan in terms of reduction of the dose to the brain with the same dose conformity and homogeneity in the target volumes. This study showed that IMRT+e is a viable treatment modality for patients with extensive scalp lesions. It provides a feasible alternative to existing treatment techniques, resulting in improved homogeneity of dose to the PTV compared to conventional electron techniques and a decrease in dose to the brain compared to photon IMRT alone.
NASA Astrophysics Data System (ADS)
Reiter, D. T.; Rodi, W. L.
2015-12-01
Constructing 3D Earth models through the joint inversion of large geophysical data sets presents numerous theoretical and practical challenges, especially when diverse types of data and model parameters are involved. Among the challenges are the computational complexity associated with large data and model vectors and the need to unify differing model parameterizations, forward modeling methods and regularization schemes within a common inversion framework. The challenges can be addressed in part by decomposing the inverse problem into smaller, simpler inverse problems that can be solved separately, providing one knows how to merge the separate inversion results into an optimal solution of the full problem. We have formulated an approach to the decomposition of large inverse problems based on the augmented Lagrangian technique from optimization theory. As commonly done, we define a solution to the full inverse problem as the Earth model minimizing an objective function motivated, for example, by a Bayesian inference formulation. Our decomposition approach recasts the minimization problem equivalently as the minimization of component objective functions, corresponding to specified data subsets, subject to the constraints that the minimizing models be equal. A standard optimization algorithm solves the resulting constrained minimization problems by alternating between the separate solution of the component problems and the updating of Lagrange multipliers that serve to steer the individual solution models toward a common model solving the full problem. We are applying our inversion method to the reconstruction of the crust and upper-mantle seismic velocity structure across Eurasia. Data for the inversion comprise a large set of P and S body-wave travel times and fundamental and first-higher mode Rayleigh-wave group velocities.
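The alternating scheme described, separate component solves plus multiplier updates steering toward a common model, has the flavor of consensus ADMM. A sketch on two synthetic quadratic "data subset" misfits sharing one model vector (problem sizes and the penalty parameter rho are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)

# Two "data subsets" (e.g., body-wave and surface-wave misfits), each a
# quadratic functional of the same 6-parameter model vector.
m_true = rng.standard_normal(6)
A1, A2 = rng.standard_normal((20, 6)), rng.standard_normal((15, 6))
d1, d2 = A1 @ m_true, A2 @ m_true

def consensus_admm(blocks, rho=1.0, n_iter=300):
    """Augmented-Lagrangian (consensus ADMM) minimization of sum_i ||A_i m - d_i||^2.

    Each component problem is solved separately in closed form; Lagrange
    multipliers u_i steer the component models toward a consensus model z."""
    n = blocks[0][0].shape[1]
    z = np.zeros(n)
    u = [np.zeros(n) for _ in blocks]
    for _ in range(n_iter):
        ms = [np.linalg.solve(A.T @ A + rho * np.eye(n), A.T @ d + rho * (z - ui))
              for (A, d), ui in zip(blocks, u)]
        z = np.mean([m + ui for m, ui in zip(ms, u)], axis=0)   # consensus update
        u = [ui + m - z for ui, m in zip(u, ms)]                # multiplier update
    return z

z = consensus_admm([(A1, d1), (A2, d2)])
print("max error vs joint solution:", np.max(np.abs(z - m_true)))
```

The component solves never see each other's data, only the consensus variable and multipliers, which is what makes this decomposition attractive when the subsets use different forward modeling codes.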
A frozen Gaussian approximation-based multi-level particle swarm optimization for seismic inversion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Jinglai, E-mail: jinglaili@sjtu.edu.cn; Lin, Guang, E-mail: lin491@purdue.edu; Computational Sciences and Mathematics Division, Pacific Northwest National Laboratory, Richland, WA 99352
2015-09-01
In this paper, we propose a frozen Gaussian approximation (FGA)-based multi-level particle swarm optimization (MLPSO) method for seismic inversion of high-frequency wave data. The method addresses two challenges: First, the optimization problem is highly non-convex, which makes it hard for gradient-based methods to reach global minima. This is tackled by MLPSO, which can escape from undesired local minima. Second, the high-frequency character of seismic waves requires a large number of grid points in direct computational methods, and thus renders an extremely high computational demand on the simulation of each sample in MLPSO. We overcome this difficulty by three steps: First, we use FGA to compute high-frequency wave propagation based on asymptotic analysis on the phase plane; then we design a constrained full waveform inversion problem to prevent the optimization search from entering regions of velocity where FGA is not accurate; last, we solve the constrained optimization problem by MLPSO employing FGA solvers with different fidelity. The performance of the proposed method is demonstrated by a two-dimensional full-waveform inversion example of the smoothed Marmousi model.
NASA Astrophysics Data System (ADS)
Rocha, Humberto; Dias, Joana M.; Ferreira, Brígida C.; Lopes, Maria C.
2013-05-01
Generally, the inverse planning of radiation therapy consists mainly of fluence optimization. The beam angle optimization (BAO) in intensity-modulated radiation therapy (IMRT) consists of selecting appropriate radiation incidence directions and may influence the quality of IMRT plans, both by enhancing organ sparing and by improving tumor coverage. However, in clinical practice, most of the time, beam directions continue to be manually selected by the treatment planner without objective and rigorous criteria. The goal of this paper is to introduce a novel approach that uses beam’s-eye-view dose ray tracing metrics within a pattern search method framework in the optimization of the highly non-convex BAO problem. Pattern search methods are derivative-free optimization methods that require few function evaluations to progress and converge and have the ability to better avoid local entrapment. The pattern search method framework is composed of a search step and a poll step at each iteration. The poll step performs a local search in a mesh neighborhood and ensures convergence to a local minimizer or stationary point. The search step provides the flexibility for a global search since it allows searches away from the neighborhood of the current iterate. Beam’s-eye-view dose metrics assign a score to each radiation beam direction and can be used within the pattern search framework, furnishing a priori knowledge of the problem so that directions with larger dosimetric scores are tested first. A set of clinical cases of head-and-neck tumors treated at the Portuguese Institute of Oncology of Coimbra is used to discuss the potential of this approach in the optimization of the BAO problem.
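The poll step described above — evaluate the objective at mesh points around the current iterate, accept any improvement, and shrink the mesh when none is found — is the core of any pattern search. A minimal coordinate-direction sketch follows; the quadratic objective is an illustrative stand-in for a plan score over beam angles, not the paper's dosimetric objective.

```python
# Derivative-free pattern (direct) search: poll +/- step along each
# coordinate; if no trial point improves, halve the mesh size.

def pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=10000):
    x, fx = list(x0), f(x0)
    for _ in range(max_iter):
        improved = False
        # Poll step over the coordinate directions.
        for i in range(len(x)):
            for d in (step, -step):
                trial = list(x)
                trial[i] += d
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= 0.5           # refine the mesh
            if step < tol:
                break
    return x, fx

x, fx = pattern_search(lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2,
                       [0.0, 0.0])
```

Only objective values are used, never gradients, which is what makes the framework attractive for non-convex, simulation-driven scores; the paper's search step (not sketched here) adds the dose-metric-guided global exploration.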
Inverse problems in the design, modeling and testing of engineering systems
NASA Technical Reports Server (NTRS)
Alifanov, Oleg M.
1991-01-01
Formulations, classification, areas of application, and approaches to solving different inverse problems are considered for the design of structures, modeling, and experimental data processing. Problems in the practical implementation of theoretical-experimental methods based on solving inverse problems are analyzed in order to identify mathematical models of physical processes, aid in input data preparation for design parameter optimization, help in design parameter optimization itself, and to model experiments, large-scale tests, and real tests of engineering systems.
Optimal inverse functions created via population-based optimization.
Jennings, Alan L; Ordóñez, Raúl
2014-06-01
Finding optimal inputs for a multiple-input, single-output system is taxing for a system operator. Population-based optimization is used to create sets of functions that produce a locally optimal input based on a desired output. An operator or higher level planner could use one of the functions in real time. For the optimization, each agent in the population uses the cost and output gradients to take steps lowering the cost while maintaining their current output. When an agent reaches an optimal input for its current output, additional agents are generated in the output gradient directions. The new agents then settle to the local optima for the new output values. The set of associated optimal points forms an inverse function, via spline interpolation, from a desired output to an optimal input. In this manner, multiple locally optimal functions can be created. These functions are naturally clustered in input and output spaces allowing for a continuous inverse function. The operator selects the best cluster over the anticipated range of desired outputs and adjusts the set point (desired output) while maintaining optimality. This reduces the demand from controlling multiple inputs, to controlling a single set point with no loss in performance. Results are demonstrated on a sample set of functions and on a robot control problem.
Action Understanding as Inverse Planning
ERIC Educational Resources Information Center
Baker, Chris L.; Saxe, Rebecca; Tenenbaum, Joshua B.
2009-01-01
Humans are adept at inferring the mental states underlying other agents' actions, such as goals, beliefs, desires, emotions and other thoughts. We propose a computational framework based on Bayesian inverse planning for modeling human action understanding. The framework represents an intuitive theory of intentional agents' behavior based on the…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Penfold, S; Casiraghi, M; Dou, T
2015-06-15
Purpose: To investigate the applicability of feasibility-seeking cyclic orthogonal projections to the field of intensity modulated proton therapy (IMPT) inverse planning. Seeking feasibility of constraints only, as opposed to optimization of a merit function, is algorithmically less demanding and holds promise for parallel computation with non-cyclic orthogonal projection algorithms such as string-averaging or block-iterative strategies. Methods: A virtual 2D geometry was designed containing a C-shaped planning target volume (PTV) surrounding an organ at risk (OAR). The geometry was pixelized into 1 mm pixels. Four beams containing a subset of proton pencil beams were simulated in Geant4 to provide the system matrix A whose elements a_ij correspond to the dose delivered to pixel i by a unit intensity pencil beam j. A cyclic orthogonal projections algorithm was applied with the goal of finding a pencil beam intensity distribution that would meet the following dose requirements: D-OAR < 54 Gy and 57 Gy < D-PTV < 64.2 Gy. The cyclic algorithm was based on the concept of orthogonal projections onto half-spaces according to the Agmon-Motzkin-Schoenberg algorithm, also known as ‘ART for inequalities’. Results: The cyclic orthogonal projections algorithm resulted in less than 5% of the PTV pixels and less than 1% of OAR pixels violating their dose constraints, respectively. Because of the abutting OAR-PTV geometry and the realistic modelling of the pencil beam penumbra, complete satisfaction of the dose objectives was not achieved, although this would be a clinically acceptable plan for a meningioma abutting the brainstem, for example. Conclusion: The cyclic orthogonal projections algorithm was demonstrated to be an effective tool for inverse IMPT planning in the 2D test geometry described. We plan to further develop this linear algorithm to be capable of incorporating dose-volume constraints into the feasibility-seeking algorithm.
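Each linear dose requirement above defines a half-space, and the Agmon-Motzkin-Schoenberg scheme repairs one violated half-space at a time by orthogonal projection. A small feasibility-seeking sketch follows; the toy box constraints stand in for the OAR/PTV dose bounds, where in the real problem each row `a` would come from the pencil-beam dose matrix.

```python
# Cyclic orthogonal projections ('ART for inequalities'): repeatedly
# sweep the constraints a.x <= b, projecting onto any violated half-space.

def project_halfspace(x, a, b):
    """Orthogonal projection of x onto {x : a.x <= b}; no-op if satisfied."""
    ax = sum(ai * xi for ai, xi in zip(a, x))
    if ax <= b:
        return x
    n2 = sum(ai * ai for ai in a)
    return [xi - (ax - b) / n2 * ai for ai, xi in zip(a, x)]

def cyclic_projections(x, constraints, sweeps=200):
    for _ in range(sweeps):
        for a, b in constraints:       # one cycle through all half-spaces
            x = project_halfspace(x, a, b)
    return x

# Toy feasibility problem: 1 <= x <= 2, y >= 0, x + y <= 3.
cons = [([-1.0, 0.0], -1.0),    # x >= 1
        ([1.0, 0.0], 2.0),      # x <= 2
        ([0.0, -1.0], 0.0),     # y >= 0
        ([1.0, 1.0], 3.0)]      # x + y <= 3
sol = cyclic_projections([5.0, -4.0], cons)
```

For a feasible system the iterates converge to a point satisfying every inequality; no merit function is ever formed, which is the algorithmic economy the abstract highlights.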
Adaptation of the CVT algorithm for catheter optimization in high dose rate brachytherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poulin, Eric; Fekete, Charles-Antoine Collins; Beaulieu, Luc
2013-11-15
Purpose: An innovative, simple, and fast method to optimize the number and position of catheters is presented for prostate and breast high dose rate (HDR) brachytherapy, both for arbitrary templates and template-free implants (such as robotic templates). Methods: To test our method, eight clinical cases were chosen randomly from a bank of patients previously treated in our clinic. The 2D Centroidal Voronoi Tessellations (CVT) algorithm was adapted to distribute catheters uniformly in space, within the maximum external contour of the planning target volume. The catheter optimization procedure includes the inverse planning simulated annealing algorithm (IPSA). Complete treatment plans can then be generated from the algorithm for different numbers of catheters. The best plan is chosen according to different dosimetry criteria and automatically provides the number of catheters and their positions. After the CVT algorithm parameters were optimized for speed and dosimetric results, the method was validated against prostate clinical cases, using clinically relevant dose parameters. The robustness to implantation error was also evaluated. Finally, the efficiency of the method was tested on breast interstitial HDR brachytherapy cases. Results: The effect of the number and locations of the catheters was studied for prostate cancer patients. Treatment plans with better or equivalent dose distributions could be obtained with fewer catheters. A better or equal prostate V100 was obtained down to 12 catheters. Plans with nine or fewer catheters would not be clinically acceptable in terms of prostate V100 and D90. Implantation errors up to 3 mm were acceptable since no statistical difference was found when compared to 0 mm error (p > 0.05). No significant difference in dosimetric indices was observed for the different combinations of parameters within the CVT algorithm. A linear relation was found between the number of random points and the optimization time of the CVT algorithm.
Because the computation time scales with the number of points, and no effect on the dosimetric indices was observed when varying the number of sampling points and the number of iterations, these were fixed to 2500 and 100, respectively. The computation time to obtain ten complete treatment plans ranging from 9 to 18 catheters, with the corresponding dosimetric indices, was 90 s. However, 93% of the computation time is used by a research version of IPSA. For the breast, on average, the Radiation Therapy Oncology Group recommendations would be satisfied down to 12 catheters. Plans with nine or fewer catheters would not be clinically acceptable in terms of V100, dose homogeneity index, and D90. Conclusions: The authors have devised a simple, fast, and efficient method to optimize the number and position of catheters in interstitial HDR brachytherapy. The method was shown to be robust for both prostate and breast HDR brachytherapy. More importantly, the computation time of the algorithm is acceptable for clinical use. Ultimately, this catheter optimization algorithm could be coupled with a 3D ultrasound system to allow real-time guidance and planning in HDR brachytherapy.
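The CVT idea of distributing catheters uniformly within a contour can be sketched with Lloyd's iteration over sampled points: assign each sample to its nearest generator, then move each generator to the centroid of its region. This is a generic illustration, not the authors' implementation; the unit square stands in for the target contour, and the sample/iteration counts mirror the 2500 and 100 mentioned above.

```python
import random

# Lloyd iteration toward a centroidal Voronoi tessellation: the CVT
# energy (sum of squared distances to nearest generator) never increases.

def cvt_energy(gens, pts):
    return sum(min((px - gx) ** 2 + (py - gy) ** 2 for gx, gy in gens)
               for px, py in pts)

def lloyd_step(gens, pts):
    sums = [[0.0, 0.0, 0] for _ in gens]
    for px, py in pts:
        i = min(range(len(gens)),
                key=lambda k: (px - gens[k][0]) ** 2 + (py - gens[k][1]) ** 2)
        sums[i][0] += px; sums[i][1] += py; sums[i][2] += 1
    # Move each generator to the centroid of the samples assigned to it.
    return [(sx / n, sy / n) if n else g
            for (sx, sy, n), g in zip(sums, gens)]

rng = random.Random(1)
samples = [(rng.random(), rng.random()) for _ in range(2500)]
catheters = [(rng.random(), rng.random()) for _ in range(12)]
e0 = cvt_energy(catheters, samples)
for _ in range(100):
    catheters = lloyd_step(catheters, samples)
e1 = cvt_energy(catheters, samples)
```

In the paper's workflow the resulting positions would then be fed to IPSA for dose optimization; here the iteration only demonstrates the uniform spatial spreading.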
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boutilier, Justin J., E-mail: j.boutilier@mail.utoronto.ca; Lee, Taewoo; Craig, Tim
Purpose: To develop and evaluate the clinical applicability of advanced machine learning models that simultaneously predict multiple optimization objective function weights from patient geometry for intensity-modulated radiation therapy of prostate cancer. Methods: A previously developed inverse optimization method was applied retrospectively to determine optimal objective function weights for 315 treated patients. The authors used an overlap volume ratio (OV) of bladder and rectum for different PTV expansions and overlap volume histogram slopes (OVSR and OVSB for the rectum and bladder, respectively) as explanatory variables that quantify patient geometry. Using the optimal weights as ground truth, the authors trained and applied three prediction models: logistic regression (LR), multinomial logistic regression (MLR), and weighted K-nearest neighbor (KNN). The population average of the optimal objective function weights was also calculated. Results: The OV at 0.4 cm and OVSR at 0.1 cm features were found to be the most predictive of the weights. The authors observed comparable performance (i.e., no statistically significant difference) between LR, MLR, and KNN methodologies, with LR appearing to perform the best. All three machine learning models outperformed the population average by a statistically significant amount over a range of clinical metrics including bladder/rectum V53Gy, bladder/rectum V70Gy, and dose to the bladder, rectum, CTV, and PTV. When comparing the weights directly, the LR model predicted bladder and rectum weights that had, on average, a 73% and 74% relative improvement over the population average weights, respectively. The treatment plans resulting from the LR weights had, on average, a rectum V70Gy that was 35% closer to the clinical plan and a bladder V70Gy that was 29% closer, compared to the population average weights. Similar results were observed for all other clinical metrics.
Conclusions: The authors demonstrated that the KNN and MLR weight prediction methodologies perform comparably to the LR model and can produce clinical-quality treatment plans by simultaneously predicting multiple weights that capture the trade-offs associated with sparing multiple OARs.
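The weighted KNN idea above — predict a new patient's objective weight from the weights of the most geometrically similar training patients, weighting neighbors by inverse distance — can be sketched in a few lines. The single feature and all values below are illustrative stand-ins for an overlap-volume metric, not the paper's data.

```python
# Weighted K-nearest-neighbor regression: inverse-distance-weighted
# average of the K closest training examples' target values.

def knn_predict(train, x, k=3):
    """train: list of (feature, weight) pairs; predict the weight at x."""
    nearest = sorted(train, key=lambda fw: abs(fw[0] - x))[:k]
    num = den = 0.0
    for f, w in nearest:
        d = abs(f - x)
        if d == 0.0:
            return w               # exact feature match
        num += w / d
        den += 1.0 / d
    return num / den

# Toy training set: overlap-volume feature -> optimal rectum weight.
train = [(0.1, 0.9), (0.2, 0.8), (0.4, 0.5), (0.6, 0.3), (0.8, 0.1)]
pred = knn_predict(train, 0.5)
```

The prediction interpolates between nearby patients' weights, which is why the method can beat a single population-average weight: it adapts to where the new patient sits in feature space.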
Mineral inversion for element capture spectroscopy logging based on optimization theory
NASA Astrophysics Data System (ADS)
Zhao, Jianpeng; Chen, Hui; Yin, Lu; Li, Ning
2017-12-01
Understanding the mineralogical composition of a formation is an essential step in the petrophysical evaluation of petroleum reservoirs. Geochemical logging tools can provide quantitative measurements of a wide range of elements. In this paper, element capture spectroscopy (ECS) was taken as an example and an optimization method was adopted to solve the mineral inversion problem for ECS. This method used the relationships converting elements to minerals as response equations, took into account the statistical uncertainty of the element measurements, and established an optimization function for ECS. The objective function value and reconstructed elemental logs were used to check the robustness and reliability of the inversion method. Finally, the inverted mineral results were in good agreement with X-ray diffraction laboratory data. The accurate conversion of elemental dry weights to mineral dry weights forms the foundation for subsequent applications based on ECS.
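The element-to-mineral conversion can be posed as a small least-squares problem: each mineral contributes elements according to a response matrix, and the mineral dry weights are recovered by minimizing the misfit to the measured element dry weights. The sketch below uses the unweighted normal equations with two minerals and invented response values; the real method additionally weights by the measurements' statistical uncertainty.

```python
# Least-squares mineral inversion sketch: recover mineral weights m from
# element weights e with e = R m, via the normal equations (R^T R) m = R^T e.

def invert_minerals(R, e):
    """R: 3x2 element-response matrix; e: 3 measured elements."""
    a = sum(R[i][0] * R[i][0] for i in range(3))
    b = sum(R[i][0] * R[i][1] for i in range(3))
    c = sum(R[i][1] * R[i][1] for i in range(3))
    p = sum(R[i][0] * e[i] for i in range(3))
    q = sum(R[i][1] * e[i] for i in range(3))
    det = a * c - b * b
    return [(c * p - b * q) / det, (a * q - b * p) / det]

# Illustrative (Si, Ca, Fe) responses for a quartz-like and a calcite-like
# phase; synthesize noiseless data from known weights, then recover them.
R = [[0.47, 0.00],
     [0.00, 0.40],
     [0.01, 0.02]]
true_m = [0.7, 0.3]
e = [sum(R[i][j] * true_m[j] for j in range(2)) for i in range(3)]
m = invert_minerals(R, e)
```

With noiseless synthetic data the recovery is exact; reconstructing the elemental logs from the inverted weights (R m) is exactly the consistency check the abstract describes.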
Using a derivative-free optimization method for multiple solutions of inverse transport problems
Armstrong, Jerawan C.; Favorite, Jeffrey A.
2016-01-14
Identifying unknown components of an object that emits radiation is an important problem for national and global security. Radiation signatures measured from an object of interest can be used to infer object parameter values that are not known. This problem is called an inverse transport problem. An inverse transport problem may have multiple solutions, and the most widely used approach for its solution is an iterative optimization method. This paper proposes a stochastic derivative-free global optimization algorithm to find multiple solutions of inverse transport problems. The algorithm is an extension of a multilevel single linkage (MLSL) method in which a mesh adaptive direct search (MADS) algorithm is incorporated into the local phase. Furthermore, numerical test cases using uncollided fluxes of discrete gamma-ray lines are presented to show the performance of this new algorithm.
Optimism, coping and long-term recovery from coronary artery surgery in women.
King, K B; Rowe, M A; Kimble, L P; Zerwic, J J
1998-02-01
Optimism, coping strategies, and psychological and functional outcomes were measured in 55 women undergoing coronary artery surgery. Data were collected in-hospital and at 1, 6, and 12 months after surgery. Optimism was related to positive moods and life satisfaction, and inversely related to negative moods. Few relationships were found between optimism and functional ability. Cognitive coping strategies accounted for a mediating effect between optimism and negative mood. Optimists were more likely to accept their situation, and less likely to use escapism. In turn, these coping strategies were inversely related to negative mood and mediated the relationship between optimism and this outcome. Optimism was not related to problem-focused coping strategies; thus, these coping strategies cannot explain the relationship between optimism and outcomes.
NASA Astrophysics Data System (ADS)
McIntosh, Chris; Purdie, Thomas G.
2017-01-01
Automating the radiotherapy treatment planning process is a technically challenging problem. The majority of automated approaches have focused on customizing and inferring dose volume objectives to be used in plan optimization. In this work we outline a multi-patient atlas-based dose prediction approach that learns to predict the dose-per-voxel for a novel patient directly from the computed tomography planning scan without the requirement of specifying any objectives. Our method learns to automatically select the most effective atlases for a novel patient, and then map the dose from those atlases onto the novel patient. We extend our previous work to include a conditional random field for the optimization of a joint distribution prior that matches the complementary goals of an accurately spatially distributed dose distribution while still adhering to the desired dose volume histograms. The resulting distribution can then be used for inverse-planning with a new spatial dose objective, or to create typical dose volume objectives for the canonical optimization pipeline. We investigated six treatment sites (633 patients for training and 113 patients for testing) and evaluated the mean absolute difference in all DVHs for the clinical and predicted dose distribution. The results on average are favorable in comparison to our previous approach (1.91 versus 2.57). Comparing our method with and without atlas-selection further validates that atlas-selection improved dose prediction on average in whole breast (0.64 versus 1.59), prostate (2.13 versus 4.07), and rectum (1.46 versus 3.29) while it is less important in breast cavity (0.79 versus 0.92) and lung (1.33 versus 1.27) for which there is high conformity and minimal dose shaping. In CNS brain, atlas-selection has the potential to be impactful (3.65 versus 5.09), but selecting the ideal atlas is the most challenging.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, V; Nguyen, D; Tran, A
Purpose: To develop and clinically implement 4π radiotherapy, an inverse optimization platform that maximally utilizes non-coplanar intensity modulated radiotherapy (IMRT) beams to significantly improve critical organ sparing. Methods: A 3D scanner was used to digitize the human and phantom subject surfaces, which were positioned in the computer assisted design (CAD) model of a TrueBeam machine to create a virtual geometrical model, based on which the feasible beam space was calculated for different tumor locations. Beamlets were computed for all feasible beams using convolution/superposition. A column generation algorithm was employed to optimize patient-specific beam orientations and fluence maps. Optimal routing through all selected beams was calculated by a level set method. The resultant plans were converted to XML files and delivered to phantoms in the TrueBeam developer mode. Finally, 4π plans were recomputed in Eclipse and manually delivered to recurrent GBM patients. Results: Compared to IMRT utilizing manually selected beams and volumetric modulated arc therapy plans, markedly improved dosimetry was observed using 4π for the brain, head and neck, liver, lung, and prostate patients. The improvements were due to significantly improved conformality and reduced high dose spillage to organs mediolateral to the PTV. The virtual geometrical model was experimentally validated. Safety margins giving 99.9% confidence in collision avoidance were included in the model, based on model accuracy estimates determined via 300 physical machine-to-phantom distance measurements. Automated delivery in the developer mode was completed in 10 minutes and was collision free. Manual 4π treatment of the GBM cases resulted in significant brainstem sparing and took 35–45 minutes including multiple images, which showed submillimeter cranial intrafractional motion.
Conclusion: The mathematical modeling utilized in 4π is sufficiently accurate to create and guide highly complex non-coplanar IMRT treatments that consistently and significantly outperform human-operator-created plans. Deliverability of such plans is clinically demonstrated. This work is funded by Varian Medical Systems and the NSF Graduate Research Fellowship DGE-1144087.
Development of a residency program in radiation oncology physics: an inverse planning approach
Dunscombe, Peter B.
2016-01-01
Over the last two decades, there has been a concerted effort in North America to organize medical physicists’ clinical training programs along more structured and formal lines. This effort has been prompted by the Commission on Accreditation of Medical Physics Education Programs (CAMPEP) which has now accredited about 90 residency programs. Initially the accreditation focused on standardized and higher quality clinical physics training; the development of rounded professionals who can function at a high level in a multidisciplinary environment was recognized as a priority of a radiation oncology physics residency only lately. In this report, we identify and discuss the implementation of, and the essential components of, a radiation oncology physics residency designed to produce knowledgeable and effective clinical physicists for today's safety‐conscious and collaborative work environment. Our approach is that of inverse planning, by now familiar to all radiation oncology physicists, in which objectives and constraints are identified prior to the design of the program. Our inverse planning objectives not only include those associated with traditional residencies (i.e., clinical physics knowledge and critical clinical skills), but also encompass those other attributes essential for success in a modern radiation therapy clinic. These attributes include formal training in management skills and leadership, teaching and communication skills, and knowledge of error management techniques and patient safety. The constraints in our optimization exercise are associated with the limited duration of a residency and the training resources available. Without compromising the knowledge and skills needed for clinical tasks, we have successfully applied the model to the University of Calgary's two‐year residency program. The program requires 3840 hours of overall commitment from the trainee, of which 7%–10% is spent in obtaining formal training in nontechnical “soft skills”. 
PACS number(s): 01.40 Di, 01.40.gb, 87.10‐e PMID:27074469
2009-06-01
imagining) into the HDR brachytherapy treatment planning has been demonstrated. Using the inverse planning program IPSA, dose escalation of target... Principles and Clinical Applications of IPSA; Nucletron International Physics Seminar, Vaals, Netherlands, Sept 13-16, 2006. 7 IPSA... experience with IPSA for prostate cancer treatment in HDR Brachytherapy, 4ième séminaire francophone de curiethérapie, Arcachon, France, June 15, 2006
2008-06-01
brachytherapy treatment planning has been demonstrated. Using the inverse planning program IPSA, dose escalation of target regions with a higher tumor... algorithm (called IPSA) was used to generate dose distributions for five different levels of DIL-boost, at least 110%, 120%, 130%, 140% and 150... and LDR, VI Last Generation Radiotherapy Course, São Paulo, Brazil, Oct. 19, 2006. Principles and Clinical Applications of IPSA; Nucletron
Operator-assisted planning and execution of proximity operations subject to operational constraints
NASA Technical Reports Server (NTRS)
Grunwald, Arthur J.; Ellis, Stephen R.
1991-01-01
Future multi-vehicle operations will involve multiple scenarios that will require a planning tool for the rapid, interactive creation of fuel-efficient trajectories. The planning process must deal with higher-order, non-linear processes involving dynamics that are often counter-intuitive. The optimization of resulting trajectories can be difficult to envision. An interactive proximity operations planning system is being developed to provide the operator with easily interpreted visual feedback of trajectories and constraints. This system is hosted on an IRIS 4D graphics platform and utilizes the Clohessy-Wiltshire equations. An inverse dynamics algorithm is used to remove non-linearities while the trajectory maneuvers are decoupled and separated in a geometric spreadsheet. The operator has direct control of the position and time of trajectory waypoints to achieve the desired end conditions. Graphics provide the operator with visualization of satisfying operational constraints such as structural clearance, plume impingement, approach velocity limits, and arrival or departure corridors. Primer vector theory is combined with graphical presentation to improve operator understanding of suggested automated system solutions and to allow the operator to review, edit, or provide corrective action to the trajectory plan.
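The Clohessy-Wiltshire equations the planner relies on have a well-known closed-form solution for relative motion about a target in circular orbit, which is what makes such interactive trajectory evaluation fast. A propagation sketch follows; the mean motion value is an assumption (roughly that of a low Earth orbit), not tied to the system described.

```python
import math

# Closed-form Clohessy-Wiltshire (Hill) propagation of the relative state
# (radial x, along-track y, cross-track z) with mean motion n [rad/s].

def cw_propagate(state, t, n):
    x0, y0, z0, vx0, vy0, vz0 = state
    s, c = math.sin(n * t), math.cos(n * t)
    x = (4 - 3 * c) * x0 + s / n * vx0 + 2 / n * (1 - c) * vy0
    y = (6 * (s - n * t) * x0 + y0 - 2 / n * (1 - c) * vx0
         + (4 * s - 3 * n * t) / n * vy0)
    z = c * z0 + s / n * vz0
    vx = 3 * n * s * x0 + c * vx0 + 2 * s * vy0
    vy = 6 * n * (c - 1) * x0 - 2 * s * vx0 + (4 * c - 3) * vy0
    vz = -n * s * z0 + c * vz0
    return [x, y, z, vx, vy, vz]

# With vy0 = -2*n*x0 the secular along-track drift cancels and the
# relative orbit closes: after one period the chaser returns to its start.
n = 0.00113                      # rad/s, roughly a low-Earth-orbit mean motion
state0 = [100.0, 50.0, 20.0, 0.0, -2 * n * 100.0, 0.0]
period = 2 * math.pi / n
state1 = cw_propagate(state0, period, n)
```

Because the solution is algebraic, waypoint edits can be re-propagated instantly, which fits the interactive "geometric spreadsheet" workflow the abstract describes.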
Inverse Optimization: A New Perspective on the Black-Litterman Model.
Bertsimas, Dimitris; Gupta, Vishal; Paschalidis, Ioannis Ch
2012-12-11
The Black-Litterman (BL) model is a widely used asset allocation model in the financial industry. In this paper, we provide a new perspective. The key insight is to replace the statistical framework in the original approach with ideas from inverse optimization. This insight allows us to significantly expand the scope and applicability of the BL model. We provide a richer formulation that, unlike the original model, is flexible enough to incorporate investor information on volatility and market dynamics. Equally importantly, our approach allows us to move beyond the traditional mean-variance paradigm of the original model and construct "BL"-type estimators for more general notions of risk such as coherent risk measures. Computationally, we introduce and study two new "BL"-type estimators and their corresponding portfolios: a Mean Variance Inverse Optimization (MV-IO) portfolio and a Robust Mean Variance Inverse Optimization (RMV-IO) portfolio. These two approaches are motivated by ideas from arbitrage pricing theory and volatility uncertainty. Using numerical simulation and historical backtesting, we show that both methods often demonstrate a better risk-reward tradeoff than their BL counterparts and are more robust to incorrect investor views.
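The inverse-optimization insight at the heart of the BL approach is that, instead of estimating returns and solving for weights, one can recover the implied equilibrium returns that make the observed market weights mean-variance optimal: pi = delta * Sigma * w. A round-trip sketch follows; the covariance, weights, and risk-aversion values are illustrative, not from the paper.

```python
# Inverse mean-variance step: recover returns pi that rationalize the
# observed weights, then verify the forward optimizer reproduces them.

def implied_returns(sigma, w, delta):
    return [delta * sum(sigma[i][j] * w[j] for j in range(len(w)))
            for i in range(len(w))]

def mv_weights(sigma, pi, delta):
    """Forward problem for 2 assets: w = (1/delta) * Sigma^{-1} * pi."""
    (a, b), (c, d) = sigma
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    return [sum(inv[i][j] * pi[j] for j in range(2)) / delta
            for i in range(2)]

sigma = [[0.04, 0.01],
         [0.01, 0.09]]           # illustrative 2-asset covariance
w_mkt = [0.6, 0.4]               # observed market-cap weights
delta = 2.5                      # assumed risk-aversion coefficient
pi = implied_returns(sigma, w_mkt, delta)
w_back = mv_weights(sigma, pi, delta)
```

The exact round trip (w_back equals w_mkt) is what licenses calling pi "implied": the market portfolio is, by construction, optimal for those returns. The paper's generalization replaces this mean-variance step with inverse optimization under richer risk measures.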
A Stochastic Inversion Method for Potential Field Data: Ant Colony Optimization
NASA Astrophysics Data System (ADS)
Liu, Shuang; Hu, Xiangyun; Liu, Tianyou
2014-07-01
Simulating natural ants' foraging behavior, the ant colony optimization (ACO) algorithm performs excellently on combinatorial optimization problems, for example the traveling salesman problem and the quadratic assignment problem. However, the ACO is seldom used to invert gravitational and magnetic data. On the basis of the continuous and multi-dimensional objective function for potential field data inversion, we present the node partition strategy ACO (NP-ACO) algorithm for inversion of model variables of fixed shape and recovery of physical property distributions of complicated shape models. We divide the continuous variables into discrete nodes and ants directionally tour the nodes by use of transition probabilities. We update the pheromone trails by use of a Gaussian mapping between the objective function value and the quantity of pheromone. The algorithm can analyze the search results in real time and improve the rate of convergence and precision of inversion. Traditional mappings, including the ant-cycle system, weaken the differences between ant individuals and lead to premature convergence. We tested our method on synthetic data and real data from scenarios involving gravity and magnetic anomalies. The inverted model variables and recovered physical property distributions were in good agreement with the true values. The ACO algorithm for binary representation imaging and full imaging can recover sharper physical property distributions than traditional linear inversion methods. The ACO has good optimization capability and some excellent characteristics, for example robustness, parallel implementation, and portability, compared with other stochastic metaheuristics.
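The node-partition idea — discretize a continuous model variable into nodes, let ants pick nodes with pheromone-weighted probabilities, and reinforce pheromone through a mapping of the objective value — can be sketched compactly. This is a generic single-variable illustration, not NP-ACO itself; the parabola stands in for a potential-field misfit with its minimum at depth 3.0, and all parameter values are assumptions.

```python
import math, random

# Minimal node-partition ant colony sketch: pheromone-proportional node
# selection, Gaussian objective-to-pheromone mapping, and evaporation.

def aco_minimize(f, lo, hi, n_nodes=50, n_ants=20, iters=100,
                 rho=0.1, seed=0):
    rng = random.Random(seed)
    nodes = [lo + (hi - lo) * i / (n_nodes - 1) for i in range(n_nodes)]
    tau = [1.0] * n_nodes                  # pheromone per node
    best_x, best_f = None, float("inf")
    for _ in range(iters):
        total = sum(tau)
        for _ in range(n_ants):
            # Transition probability proportional to pheromone.
            r, acc, k = rng.random() * total, 0.0, 0
            for k, t in enumerate(tau):
                acc += t
                if acc >= r:
                    break
            fx = f(nodes[k])
            if fx < best_f:
                best_x, best_f = nodes[k], fx
            # Gaussian mapping: better objective -> larger deposit.
            tau[k] += math.exp(-(fx - best_f) ** 2)
        tau = [(1 - rho) * t for t in tau]  # evaporation
    return best_x, best_f

x, fx = aco_minimize(lambda depth: (depth - 3.0) ** 2, 0.0, 5.0)
```

The Gaussian mapping keeps deposits from good ants near 1 and drives deposits from poor ants toward 0, which preserves differences between individuals — the premature-convergence issue the abstract raises for the ant-cycle mapping.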
Stress estimation in reservoirs using an integrated inverse method
NASA Astrophysics Data System (ADS)
Mazuyer, Antoine; Cupillard, Paul; Giot, Richard; Conin, Marianne; Leroy, Yves; Thore, Pierre
2018-05-01
Estimating the stress in reservoirs and their surroundings prior to production is a key issue for reservoir management planning. In this study, we propose an integrated inverse method to estimate this initial stress state. The 3D stress state is constructed with the displacement-based finite element method, assuming linear isotropic elasticity and small perturbations in the current geometry of the geological structures. The Neumann boundary conditions are defined as piecewise linear functions of depth. The discontinuous functions are determined with the CMA-ES (Covariance Matrix Adaptation Evolution Strategy) optimization algorithm to fit wellbore stress data deduced from leak-off tests and breakouts. The disregard of the geological history and the simplified rheological assumptions mean that only the stress field that is statically admissible and matches the wellbore data should be exploited. The spatial domain of validity of this statement is assessed by comparing the stress estimations for a synthetic folded structure of finite amplitude with a history constructed assuming a viscous response.
Application of Carbonate Reservoir using waveform inversion and reverse-time migration methods
NASA Astrophysics Data System (ADS)
Kim, W.; Kim, H.; Min, D.; Keehm, Y.
2011-12-01
Recent exploration targets of oil and gas resources are deeper and more complicated subsurface structures, and carbonate reservoirs have become one of the attractive and challenging targets in seismic exploration. To increase the rate of success in oil and gas exploration, it is necessary to delineate detailed subsurface structures. Accordingly, the migration method is an increasingly important factor in seismic data processing for this delineation. Seismic migration has a long history, and many migration techniques have been developed. Among them, reverse-time migration is promising because it can provide reliable images of complicated models, even in the case of significant velocity contrasts. The reliability of seismic migration images depends on the subsurface velocity models, which can be extracted in several ways. These days, geophysicists try to obtain velocity models through seismic full waveform inversion. Since Lailly (1983) and Tarantola (1984) proposed that the adjoint state of wave equations can be used in waveform inversion, the back-propagation techniques used in reverse-time migration have been used in waveform inversion, which accelerated the development of waveform inversion. In this study, we applied acoustic waveform inversion and reverse-time migration methods to carbonate reservoir models with various reservoir thicknesses to examine the feasibility of the methods in delineating carbonate reservoir models. We first extracted subsurface material properties from acoustic waveform inversion, and then applied reverse-time migration using the inverted velocities as a background model. The waveform inversion in this study used the back-propagation technique, and the conjugate gradient method was used for optimization. The inversion was performed using the frequency-selection strategy.
Finally, the waveform inversion results showed that carbonate reservoir models are clearly inverted by waveform inversion and that migration images based on the inversion results are quite reliable. Reservoir models of different thicknesses were also examined, and the results revealed that the lower boundary of the reservoir was not delineated because of energy loss. From these results, it was noted that carbonate reservoirs can be properly imaged and interpreted by waveform inversion and reverse-time migration methods. This work was supported by the Energy Resources R&D program of the Korea Institute of Energy Technology Evaluation and Planning (KETEP) grant funded by the Korea government Ministry of Knowledge Economy (No. 2009201030001A, No. 2010T100200133) and the Brain Korea 21 project of Energy System Engineering.
Hori, Daisuke; Katsuragawa, Shigehiko; Murakami, Ryuuji; Hirai, Toshinori
2010-04-20
We propose a computerized method for semi-automated segmentation of the gross tumor volume (GTV) of a glioblastoma multiforme (GBM) on brain MR images for radiotherapy planning (RTP). Three-dimensional (3D) MR images of 28 cases with a GBM were used in this study. First, a spherical volume of interest (VOI) including the GBM was selected by clicking a part of the GBM region in the 3D image. Then, the spherical VOI was transformed to a two-dimensional (2D) image by use of a spiral-scanning technique. We employed active contour models (ACM) to delineate an optimal outline of the GBM in the transformed 2D image. After inverse transformation of the optimal outline to the 3D space, a morphological filter was applied to smooth the shape of the 3D segmented region. For evaluation, we compared the computer output with regions segmented manually by a therapeutic radiologist using a manual tracking method, employing the Jaccard similarity coefficient (JSC) and the true segmentation coefficient (TSC) computed on volumes. The mean and standard deviation were 74.2 ± 9.8% for JSC and 84.1 ± 7.1% for TSC, respectively. Our segmentation method provided a relatively accurate outline for the GBM and would be useful for radiotherapy planning.
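The Jaccard similarity coefficient used in the evaluation is the volume of the intersection of the two segmentations divided by the volume of their union. A minimal sketch with hypothetical binary 3D masks (the TSC is omitted here, since its exact definition is not given in the abstract):

```python
import numpy as np

def jaccard(seg_a: np.ndarray, seg_b: np.ndarray) -> float:
    """Jaccard similarity coefficient |A ∩ B| / |A ∪ B| for binary masks."""
    a = seg_a.astype(bool)
    b = seg_b.astype(bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return float(np.logical_and(a, b).sum() / union)

# hypothetical masks: computer output vs. manually segmented reference
auto = np.zeros((4, 4, 4), dtype=bool)
auto[1:3, 1:3, 1:3] = True        # 8 voxels
manual = np.zeros((4, 4, 4), dtype=bool)
manual[1:3, 1:3, 0:3] = True      # 12 voxels; overlaps auto in 8 voxels
print(jaccard(auto, manual))      # 8 / 12
```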
NASA Astrophysics Data System (ADS)
Métivier, L.; Brossier, R.; Mérigot, Q.; Oudet, E.; Virieux, J.
2016-04-01
Full waveform inversion using the conventional L2 distance to measure the misfit between seismograms is known to suffer from cycle skipping. An alternative strategy is proposed in this study, based on a misfit measure computed with an optimal transport distance. This measure accounts for the lateral coherency of events within the seismograms, instead of considering each seismic trace independently, as is generally done in full waveform inversion. The computation of this optimal transport distance relies on a particular mathematical formulation that allows for the non-conservation of the total energy between seismograms. The numerical solution of the optimal transport problem is obtained using proximal splitting techniques. Three synthetic case studies are investigated using this strategy: the Marmousi 2 model, the BP 2004 salt model, and the Chevron 2014 benchmark data. The results emphasize interesting properties of the optimal transport distance. The associated misfit function is less prone to cycle skipping. A workflow is designed to accurately reconstruct the salt structures in the BP 2004 model, starting from an initial model containing no information about these structures. A high-resolution P-wave velocity estimate is built from the Chevron 2014 benchmark data, following a frequency continuation strategy. This estimate explains the data accurately, whereas full waveform inversion based on the L2 distance, using the same workflow, converges towards a local minimum. These results yield encouraging perspectives regarding the use of the optimal transport distance for full waveform inversion: the sensitivity to the accuracy of the initial model is reduced, the reconstruction of complex salt structures is made possible, the method is robust to noise, and the interpretation of seismic data dominated by reflections is enhanced.
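The advantage over L2 can be seen even in one dimension. In the toy sketch below, a positive two-arrival signal is shifted in time: the L2 misfit is smaller at a full inter-arrival shift than at a half shift (a spurious local minimum of the kind that causes cycle skipping), while a Wasserstein-1 distance grows monotonically with the shift. This sketch assumes non-negative, mass-normalized signals; the paper's formulation additionally handles signed data and non-conserved energy, which this toy does not.

```python
import numpy as np

t = np.linspace(0.0, 10.0, 2001)

def event_pair(t0):
    # positive "seismogram" with two narrow arrivals at t0 and t0 + 1
    return np.exp(-(t - t0) ** 2 / 0.02) + np.exp(-(t - t0 - 1.0) ** 2 / 0.02)

def l2_misfit(a, b):
    return float(np.sum((a - b) ** 2))

def w1_misfit(a, b):
    # Wasserstein-1 distance between mass-normalized non-negative signals,
    # computed as the integral of |CDF_a - CDF_b|
    dx = t[1] - t[0]
    ca = np.cumsum(a) / np.sum(a)
    cb = np.cumsum(b) / np.sum(b)
    return float(np.sum(np.abs(ca - cb)) * dx)

obs = event_pair(5.0)
# a one-period ("cycle-skipped") shift of 1.0 vs. a smaller shift of 0.5
l2_skip, l2_near = l2_misfit(event_pair(4.0), obs), l2_misfit(event_pair(4.5), obs)
w1_skip, w1_near = w1_misfit(event_pair(4.0), obs), w1_misfit(event_pair(4.5), obs)
# L2 is smaller at the larger shift (spurious local minimum);
# W1 still grows with the shift
```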
3D gravity inversion and uncertainty assessment of basement relief via Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Pallero, J. L. G.; Fernández-Martínez, J. L.; Bonvalot, S.; Fudym, O.
2017-04-01
Nonlinear gravity inversion in sedimentary basins is a classical problem in applied geophysics. Although a 2D approximation is widely used, 3D models have also been proposed to better take the basin geometry into account. A common nonlinear approach to this 3D problem consists in modeling the basin as a set of right rectangular prisms with prescribed density contrast, whose depths are the unknowns. The problem is then solved iteratively via local optimization techniques, starting from an initial model computed using some simplification or estimated from prior geophysical models. Nevertheless, this kind of approach is highly dependent on the prior information that is used, and lacks a correct solution appraisal (nonlinear uncertainty analysis). In this paper, we use the family of global Particle Swarm Optimization (PSO) optimizers for 3D gravity inversion and model appraisal of the solution adopted for basement relief estimation in sedimentary basins. Synthetic and real cases are illustrated, showing that robust results are obtained. PSO therefore seems to be a very good alternative for 3D gravity inversion and uncertainty assessment of basement relief when used in a sampling-while-optimizing approach, so that important geological questions can be answered probabilistically to support risk assessment in decision making.
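A minimal global-best PSO of the kind used here can be sketched as follows. The swarm parameters and the quadratic stand-in for the gravity misfit (prism depths versus a known target) are illustrative assumptions, not the paper's actual forward model or tuning.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso(f, dim, lo, hi, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Global-best PSO minimizing f over the box [lo, hi]^dim."""
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.apply_along_axis(f, 1, x)
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # inertia + attraction to personal best + attraction to global best
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(f, 1, x)
        better = vals < pbest_val
        pbest[better] = x[better]
        pbest_val[better] = vals[better]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, float(pbest_val.min())

# hypothetical "basement relief" misfit: squared error of 4 prism depths (km)
target_depths = np.array([1.0, 2.0, 3.0, 2.5])
misfit = lambda d: float(np.sum((d - target_depths) ** 2))
best, best_val = pso(misfit, dim=4, lo=0.0, hi=5.0)
```

In the paper's sampling-while-optimizing usage, the swarm's history of visited models is additionally retained to characterize uncertainty; the sketch above keeps only the best model.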
Sodium inversion recovery MRI on the knee joint at 7 T with an optimal control pulse.
Lee, Jae-Seung; Xia, Ding; Madelin, Guillaume; Regatte, Ravinder R
2016-01-01
In the field of sodium magnetic resonance imaging (MRI), inversion recovery (IR) is a convenient and popular method to select sodium in different environments. For the knee joint, IR has been used to suppress the signal from synovial fluids, which improves the correlation between the sodium signal and the concentration of glycosaminoglycans (GAGs) in cartilage tissues. To achieve better inversion of the magnetization vector in the presence of spatial variations of the B0 and B1 fields, the IR sequence usually employs adiabatic pulses for inversion. On the other hand, it has been shown that RF pulse shapes robust against variations of the B0 and B1 fields can be generated by numerical optimization based on optimal control theory. In this work, we compare the performance of fluid-suppressed sodium MRI of the knee joint in vivo between an IR sequence implemented with an adiabatic inversion pulse and one in which the adiabatic pulse is replaced by an optimal-control shaped pulse. While the optimal-control pulse reduces the RF power deposited in the body by 58%, the quality of fluid suppression and the signal level of sodium within cartilage are similar between the two implementations. Copyright © 2015 Elsevier Inc. All rights reserved.
Family Health Histories and Their Impact on Retirement Confidence.
Zick, Cathleen D; Mayer, Robert N; Smith, Ken R
2015-08-01
Retirement confidence is a key social barometer. In this article, we examine how personal and parental health histories relate to working-age adults' feelings of optimism or pessimism about their overall retirement prospects. This study links survey data on retirement planning with information on respondents' own health histories and those of their parents. The multivariate models control for the respondents' socio-demographic and economic characteristics along with past retirement planning activities when estimating the relationships between family health histories and retirement confidence. Retirement confidence is inversely related to parental history of cancer and cardiovascular disease but not to personal health history. In contrast, retirement confidence is positively associated with both parents being deceased. As members of the public become increasingly aware of how genetics and other family factors affect intergenerational transmission of chronic diseases, it is likely that the link between family health histories and retirement confidence will intensify. © The Author(s) 2015.
Optimization of Craniospinal Irradiation for Pediatric Medulloblastoma Using VMAT and IMRT.
Al-Wassia, Rolina K; Ghassal, Noor M; Naga, Adly; Awad, Nesreen A; Bahadur, Yasir A; Constantinescu, Camelia
2015-10-01
Intensity-modulated radiotherapy (IMRT) and volumetric-modulated arc therapy (VMAT) provide highly conformal target radiation doses, but also expose large volumes of healthy tissue to low-dose radiation. With improving survival, more children with medulloblastoma (MB) are at risk of late adverse effects of radiotherapy, including secondary cancers. We evaluated the characteristics of IMRT and VMAT craniospinal irradiation treatment plans in children with standard-risk MB to compare radiation dose delivery to target organs and organs at risk (OAR). Each of 10 children with standard-risk MB underwent both IMRT and VMAT treatment planning. Dose calculations used inverse planning optimization with a craniospinal dose of 23.4 Gy followed by a posterior fossa boost to 55.8 Gy. Clinical and planning target volumes were demarcated on axial computed tomography images. Dose distributions to target organs and OAR for each planning technique were measured and compared with published dose-volume toxicity data for pediatric patients. All patients completed treatment planning for both techniques. Analyses and comparisons of dose distributions and dose-volume histograms for the planned target volumes, and dose delivery to the OAR for each technique demonstrated the following: (1) VMAT had a modest, but significantly better, planning target volume-dose coverage and homogeneity compared with IMRT; (2) there were different OAR dose-sparing profiles for IMRT versus VMAT; and (3) neither IMRT nor VMAT demonstrated dose reductions to the published pediatric dose limits for the eyes, the lens, the cochlea, the pituitary, and the brain. The use of both IMRT and VMAT provides good target tissue coverage and sparing of the adjacent tissue for MB. Both techniques resulted in OAR dose delivery within published pediatric dose guidelines, except those mentioned above. 
Pediatric patients with standard-risk MB remain at risk for late endocrinologic, sensory (auditory and visual), and brain functional impairments.
Haworth, Annette; Mears, Christopher; Betts, John M; Reynolds, Hayley M; Tack, Guido; Leo, Kevin; Williams, Scott; Ebert, Martin A
2016-01-07
Treatment plans for ten patients, initially treated with a conventional approach to low dose-rate brachytherapy (LDR, 145 Gy to entire prostate), were compared with plans for the same patients created with an inverse-optimisation planning process utilising a biologically-based objective. The 'biological optimisation' considered a non-uniform distribution of tumour cell density through the prostate based on known and expected locations of the tumour. Using dose planning-objectives derived from our previous biological-model validation study, the volume of the urethra receiving 125% of the conventional prescription (145 Gy) was reduced from a median value of 64% to less than 8% whilst maintaining high values of TCP. On average, the number of planned seeds was reduced from 85 to less than 75. The robustness of plans to random seed displacements needs to be carefully considered when using contemporary seed placement techniques. We conclude that an inverse planning approach to LDR treatments, based on a biological objective, has the potential to maintain high rates of tumour control whilst minimising dose to healthy tissue. In future, the radiobiological model will be informed using multi-parametric MRI to provide a personalised medicine approach.
NASA Astrophysics Data System (ADS)
Schumacher, Florian; Friederich, Wolfgang; Lamara, Samir; Gutt, Phillip; Paffrath, Marcel
2015-04-01
We present a seismic full waveform inversion concept for applications ranging from seismological to engineering contexts, based on sensitivity kernels for full waveforms. The kernels are derived from Born scattering theory as the Fréchet derivatives of linearized frequency-domain full waveform data functionals, quantifying the influence of elastic earth model parameters and density on the data values. For a specific source-receiver combination, the kernel is computed from the displacement and strain field spectrum originating from the source evaluated throughout the inversion domain, as well as the Green function spectrum and its strains originating from the receiver. By storing the wavefield spectra of specific sources and receivers, they can be re-used for kernel computation for different source-receiver combinations, minimizing the total number of required forward simulations. In the iterative inversion procedure, the solution of the forward problem, the computation of sensitivity kernels and the derivation of a model update are kept completely separate. In particular, the model description for the forward problem and the description of the inverted model update are kept independent. Hence, the resolution of the inverted model as well as the complexity of solving the forward problem can be increased iteratively (with increasing frequency content of the inverted data subset). This may regularize the overall inverse problem and optimizes the computational effort of both solving the forward problem and computing the model update. The required interconnection of arbitrary unstructured volume and point grids is realized by generalized high-order integration rules and 3D unstructured interpolation methods. The model update is inferred by solving a minimization problem in a least-squares sense, resulting in Gauss-Newton convergence of the overall inversion process.
The inversion method was implemented in the modularized software package ASKI (Analysis of Sensitivity and Kernel Inversion), which provides a generalized interface to arbitrary external forward modelling codes. So far, the 3D spectral-element code SPECFEM3D (Tromp, Komatitsch and Liu, 2008) and the 1D semi-analytical code GEMINI (Friederich and Dalkolmo, 1995) are supported, in both Cartesian and spherical frameworks. The creation of interfaces to further forward codes is planned for the near future. ASKI is freely available under the terms of the GPL at www.rub.de/aski . Since the independent modules of ASKI must communicate via file output/input, large storage capacities need to be conveniently accessible. Storing the complete sensitivity matrix to file, however, gives the scientist full manual control over each step in a customized procedure of sensitivity/resolution analysis and full waveform inversion. In the presentation, we will show some aspects of the theory behind the full waveform inversion method and its practical realization in the software package ASKI, as well as synthetic and real-data applications at different scales and geometries.
2016-02-02
AFRL-AFOSR-VA-TR-2016-0091: (BRI) Direct and Inverse Design Optimization of Magnetic Alloys with Minimized Use of Rare Earth Elements. The report cites: Dulikravich, G. S., Reddy, S., Orlande, H. R. B., Schwartz, J. and Koch, C. C., "Multi-Objective Design and Optimization of Hard Magnetic Alloys Free of Rare Earths", MS&T15 - Materials Science and Technology 2015 Conference, Columbus, Ohio, October 4-8, 2015.
Adjoint Sensitivity Method to Determine Optimal Set of Stations for Tsunami Source Inversion
NASA Astrophysics Data System (ADS)
Gusman, A. R.; Hossen, M. J.; Cummins, P. R.; Satake, K.
2017-12-01
We applied the adjoint sensitivity technique in tsunami science for the first time to determine an optimal set of stations for a tsunami source inversion. The adjoint sensitivity (AS) method has been used in numerical weather prediction to find optimal locations for adaptive observations. We applied this technique to Green's-function-based Time Reverse Imaging (GFTRI), which has recently been used in tsunami source inversion to reconstruct the initial sea surface displacement, known as the tsunami source model. This method has the same source representation as the traditional least-squares (LSQ) source inversion method, in which a tsunami source is represented by dividing the source region into a regular grid of "point" sources. For each of these, a Green's function (GF) is computed using a basis function for initial sea surface displacement whose amplitude is concentrated near the grid point. We applied the AS method to the 2009 Samoa earthquake tsunami that occurred on 29 September 2009 in the southwest Pacific, near the Tonga trench. Many studies show that this earthquake was a doublet, associated with both normal faulting in the outer-rise region and thrust faulting on the subduction interface. To estimate the tsunami source model for this complex event, we initially considered 11 observations consisting of 5 tide gauges and 6 DART buoys. After applying the AS method, we found an optimal set of 8 stations. Inversion with this optimal set provides better results in terms of waveform fitting and yields a source model that shows both sub-events, associated with normal and thrust faulting respectively.
NASA Astrophysics Data System (ADS)
Schröder, Markus; Brown, Alex
2009-10-01
We present a modified version of a previously published algorithm (Gollub et al 2008 Phys. Rev. Lett. 101 073002) for obtaining an optimized laser field with more general restrictions on the search space of the optimal field. The modification leads to enforcement of the constraints on the optimal field while maintaining good convergence behaviour in most cases. We demonstrate the general applicability of the algorithm by imposing constraints on the temporal symmetry of the optimal fields. The temporal symmetry is used to reduce the number of transitions that have to be optimized for quantum gate operations that involve inversion (NOT gate) or partial inversion (Hadamard gate) of the qubits in a three-dimensional model of ammonia.
Mini-batch optimized full waveform inversion with geological constrained gradient filtering
NASA Astrophysics Data System (ADS)
Yang, Hui; Jia, Junxiong; Wu, Bangyu; Gao, Jinghuai
2018-05-01
High computational cost and the tendency to generate solutions without geological sense have hindered the wide application of Full Waveform Inversion (FWI). Source-encoding techniques can dramatically reduce the cost of FWI, but they require a fixed-spread acquisition setup and converge slowly because cross-talk must be suppressed. Traditionally, gradient regularization or preconditioning is applied to mitigate the ill-posedness. An isotropic smoothing filter applied to gradients generally gives non-geological inversion results and can also introduce artifacts. In this work, we propose to address both the efficiency and the ill-posedness of FWI with a geologically constrained mini-batch gradient optimization method. Mini-batch gradient descent reduces the computation time by choosing a subset of the shots for each iteration. By jointly applying structure-oriented smoothing to the mini-batch gradient, the inversion converges faster and gives results with more geological meaning. A stylized Marmousi model is used to show the performance of the proposed method on a realistic synthetic model.
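The mini-batch idea can be sketched on a toy linear problem in which each "shot" contributes one residual and the full gradient is the sum over all shots. The linear operator, batch size, step length, and the optional smoothing hook standing in for structure-oriented smoothing are all illustrative assumptions, not the paper's wave-equation setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# toy linear "FWI": shot i contributes misfit 0.5 * (A_i m - d_i)^2,
# so the full gradient is a sum of per-shot gradients
n_shots, n_model = 64, 16
A = rng.normal(size=(n_shots, n_model))
m_true = rng.normal(size=n_model)
d_obs = A @ m_true

def shot_gradient(m, i):
    # gradient of one shot's misfit with respect to the model m
    return A[i] * (A[i] @ m - d_obs[i])

def minibatch_invert(m0, batch=8, iters=600, lr=0.1, smooth=None):
    """Gradient descent using a random subset of shots per iteration;
    `smooth` is an optional hook standing in for the paper's
    structure-oriented gradient smoothing."""
    m = m0.copy()
    for _ in range(iters):
        idx = rng.choice(n_shots, size=batch, replace=False)
        g = np.mean([shot_gradient(m, i) for i in idx], axis=0)
        if smooth is not None:
            g = smooth(g)
        m -= lr * g
    return m

m_est = minibatch_invert(np.zeros(n_model))
```

Each iteration costs one eighth of a full-gradient iteration here, yet the iterates still converge toward the true model because the per-shot gradients all vanish at the solution.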
A Forward Glimpse into Inverse Problems through a Geology Example
ERIC Educational Resources Information Center
Winkel, Brian J.
2012-01-01
This paper describes a forward approach to an inverse problem related to detecting the nature of geological substrata which makes use of optimization techniques in a multivariable calculus setting. The true nature of the related inverse problem is highlighted. (Contains 2 figures.)
Inverse Optimization: A New Perspective on the Black-Litterman Model
Bertsimas, Dimitris; Gupta, Vishal; Paschalidis, Ioannis Ch.
2014-01-01
The Black-Litterman (BL) model is a widely used asset allocation model in the financial industry. In this paper, we provide a new perspective. The key insight is to replace the statistical framework in the original approach with ideas from inverse optimization. This insight allows us to significantly expand the scope and applicability of the BL model. We provide a richer formulation that, unlike the original model, is flexible enough to incorporate investor information on volatility and market dynamics. Equally importantly, our approach allows us to move beyond the traditional mean-variance paradigm of the original model and construct “BL”-type estimators for more general notions of risk such as coherent risk measures. Computationally, we introduce and study two new “BL”-type estimators and their corresponding portfolios: a Mean Variance Inverse Optimization (MV-IO) portfolio and a Robust Mean Variance Inverse Optimization (RMV-IO) portfolio. These two approaches are motivated by ideas from arbitrage pricing theory and volatility uncertainty. Using numerical simulation and historical backtesting, we show that both methods often demonstrate a better risk-reward tradeoff than their BL counterparts and are more robust to incorrect investor views. PMID:25382873
An algorithm for deriving core magnetic field models from the Swarm data set
NASA Astrophysics Data System (ADS)
Rother, Martin; Lesur, Vincent; Schachtschneider, Reyko
2013-11-01
In view of an optimal exploitation of the Swarm data set, we have prepared and tested software dedicated to the determination of accurate core magnetic field models and of the Euler angles between the magnetic sensors and the satellite reference frame. The dedicated core field model estimation is derived directly from the GFZ Reference Internal Magnetic Model (GRIMM) inversion and modeling family. The data selection techniques and the model parameterizations are similar to those used for the derivation of the second (Lesur et al., 2010) and third versions of GRIMM, although the use of observatory data is not planned in the framework of the application to Swarm. The regularization technique applied during the inversion process smoothes the magnetic field model in time. The algorithm to estimate the Euler angles is also derived from the CHAMP studies. The inversion scheme includes Euler angle determination with a quaternion representation to describe the rotations, and it has been built to handle possible weak time variations of these angles. The modeling approach and software were initially validated on a simple, noise-free, synthetic data set and on CHAMP vector magnetic field measurements. We present results of test runs applied to the synthetic Swarm test data set.
Seismic waveform inversion best practices: regional, global and exploration test cases
NASA Astrophysics Data System (ADS)
Modrak, Ryan; Tromp, Jeroen
2016-09-01
Reaching the global minimum of a waveform misfit function requires careful choices about the nonlinear optimization, preconditioning and regularization methods underlying an inversion. Because waveform inversion problems are susceptible to erratic convergence associated with strong nonlinearity, one or two test cases are not enough to reliably inform such decisions. We identify best practices, instead, using four seismic near-surface problems, one regional problem and two global problems. To make meaningful quantitative comparisons between methods, we carry out hundreds of inversions, varying one aspect of the implementation at a time. Comparing nonlinear optimization algorithms, we find that limited-memory BFGS provides computational savings over nonlinear conjugate gradient methods in a wide range of test cases. Comparing preconditioners, we show that a new diagonal scaling derived from the adjoint of the forward operator provides better performance than two conventional preconditioning schemes. Comparing regularization strategies, we find that projection, convolution, Tikhonov regularization and total variation regularization are effective in different contexts. Besides questions of one strategy or another, reliability and efficiency in waveform inversion depend on close numerical attention and care. Implementation details involving the line search and restart conditions have a strong effect on computational cost, regardless of the chosen nonlinear optimization algorithm.
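The limited-memory BFGS update that the comparison favors can be sketched with the standard two-loop recursion. The ill-conditioned quadratic test function, memory length, and Armijo backtracking below are illustrative choices, not the paper's seismic setup.

```python
import numpy as np

def lbfgs_direction(g, s_hist, y_hist):
    """L-BFGS two-loop recursion: approximate the Newton direction
    -H^{-1} g from stored pairs s_k = x_{k+1} - x_k, y_k = g_{k+1} - g_k."""
    q = g.copy()
    stack = []
    for s, y in zip(reversed(s_hist), reversed(y_hist)):  # newest first
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        q -= a * y
        stack.append((a, rho, s, y))
    if s_hist:  # scale by an initial diagonal Hessian estimate
        s, y = s_hist[-1], y_hist[-1]
        q *= (s @ y) / (y @ y)
    for a, rho, s, y in reversed(stack):  # oldest first
        q += (a - rho * (y @ q)) * s
    return -q

# minimise an ill-conditioned quadratic with Armijo backtracking
B = np.diag([1.0, 10.0, 100.0])
f = lambda x: 0.5 * x @ B @ x
x = np.ones(3)
g = B @ x
s_hist, y_hist = [], []
for _ in range(50):
    if np.linalg.norm(g) < 1e-10:
        break
    d = lbfgs_direction(g, s_hist[-5:], y_hist[-5:])  # memory of 5 pairs
    step = 1.0
    while f(x + step * d) > f(x) + 1e-4 * step * (g @ d):
        step *= 0.5  # backtracking line search (Armijo condition)
    x_new = x + step * d
    g_new = B @ x_new
    s_hist.append(x_new - x)
    y_hist.append(g_new - g)
    x, g = x_new, g_new
```

The memory cost is a handful of vector pairs rather than a full Hessian, which is what makes the method attractive at waveform-inversion model sizes.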
Liauh, Chihng-Tsung; Shih, Tzu-Ching; Huang, Huang-Wen; Lin, Win-Li
2004-02-01
An inverse algorithm with Tikhonov regularization of order zero has been used to estimate the intensity ratio of the reflected longitudinal wave to the incident longitudinal wave and that of the refracted shear wave to the total transmitted wave into bone when calculating the absorbed power field, and then to reconstruct the temperature distribution in muscle and bone regions from a limited number of temperature measurements during simulated ultrasound hyperthermia. The effects of the number of temperature sensors, the amount of noise superimposed on the temperature measurements, and the sensor locations on the performance of the inverse algorithm are investigated. Results show that noisy input data degrade the performance of the inverse algorithm, especially when the number of temperature sensors is small. Results also demonstrate an improvement in the accuracy of the temperature estimates when an optimal value of the regularization parameter is employed. Based on singular-value decomposition analysis, the optimal sensor position in a case using only one temperature sensor can be determined so that the inverse algorithm converges to the true solution.
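Order-zero Tikhonov regularization replaces the naive least-squares solution with the minimizer of ||Ax − b||² + λ||x||². The sketch below uses a synthetic ill-conditioned operator as a stand-in for the hyperthermia forward model, with the measurement noise deliberately placed along the least-stable direction to show the effect of regularization.

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Order-zero Tikhonov: minimise ||A x - b||^2 + lam * ||x||^2
    via the regularized normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# toy ill-conditioned forward operator (stand-in for the mapping from
# absorbed-power parameters to temperature-sensor readings)
rng = np.random.default_rng(3)
U, _, Vt = np.linalg.svd(rng.normal(size=(8, 8)))
A = U @ np.diag(np.logspace(0, -6, 8)) @ Vt   # condition number 1e6
x_true = rng.normal(size=8)
# noise along the least-stable singular direction: worst case for inversion
b = A @ x_true + 1e-3 * U[:, -1]

x_naive = np.linalg.solve(A, b)        # noise amplified by ~1e6
x_reg = tikhonov_solve(A, b, lam=1e-6)
```

The regularization damps the unstable components at the cost of a small bias, which is the trade-off the abstract's "optimal value of the regularization parameter" balances.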
Method and Apparatus for Performance Optimization Through Physical Perturbation of Task Elements
NASA Technical Reports Server (NTRS)
Prinzel, Lawrence J., III (Inventor); Pope, Alan T. (Inventor); Palsson, Olafur S. (Inventor); Turner, Marsha J. (Inventor)
2016-01-01
The invention is an apparatus and method of biofeedback training for attaining a physiological state optimally consistent with the successful performance of a task, wherein the probability of successfully completing the task is made inversely proportional to a physiological difference value, computed as the absolute value of the difference between at least one physiological signal optimally consistent with the successful performance of the task and at least one corresponding measured physiological signal of a trainee performing the task. The probability of successfully completing the task is made inversely proportional to the physiological difference value by making one or more measurable physical attributes of the environment in which the task is performed, and upon which completion of the task depends, vary in inverse proportion to the physiological difference value.
Vu, Van Hoan; Isableu, Brice; Berret, Bastien
2016-07-22
The purpose of this study was to investigate the nature of the variables and rules underlying the planning of unrestrained 3D arm reaching. To identify whether the brain uses kinematic, dynamic and energetic variables in an isolated manner or combines them in a flexible way, we examined the effects of speed variations upon the chosen arm trajectories during free arm movements. Within the optimal control framework, we identified which (possibly composite) optimality criterion best accounted for the empirical data. Fifteen participants were asked to perform free-endpoint reaching movements from a specific arm configuration at slow, normal and fast speeds. Experimental results revealed that prominent features of the observed motor behaviors were significantly speed-dependent, such as the chosen reach endpoint and the final arm posture. Nevertheless, participants exhibited different arm trajectories and various degrees of speed dependence in their reaching behavior. These inter-individual differences were addressed using a numerical inverse optimal control methodology. Simulation results revealed that a weighted combination of kinematic, energetic and dynamic cost functions was required to account for all the critical features of the participants' behavior. Furthermore, no evidence was found for a speed-dependent tuning of these weights, suggesting subject-specific but speed-invariant weightings of kinematic, energetic and dynamic variables during the motor planning of free arm movements. This suggests that the inter-individual differences in arm trajectories and speed dependence were due not only to anthropometric singularities but also to critical differences in the composition of the subjective cost function. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.
Adapting radiotherapy to hypoxic tumours
NASA Astrophysics Data System (ADS)
Malinen, Eirik; Søvik, Åste; Hristov, Dimitre; Bruland, Øyvind S.; Rune Olsen, Dag
2006-10-01
In the current work, the concepts of biologically adapted radiotherapy of hypoxic tumours in a framework encompassing functional tumour imaging, tumour control predictions, inverse treatment planning and intensity modulated radiotherapy (IMRT) were presented. Dynamic contrast enhanced magnetic resonance imaging (DCEMRI) of a spontaneous sarcoma in the nasal region of a dog was employed. The tracer concentration in the tumour was assumed to be related to the oxygen tension and compared to Eppendorf histograph measurements. Based on the pO2-related images derived from the MR analysis, the tumour was divided into four compartments by a segmentation procedure. DICOM structure sets for IMRT planning could be derived therefrom. In order to display the possible advantages of non-uniform tumour doses, dose redistribution among the four tumour compartments was introduced. The dose redistribution was constrained by keeping the average dose to the tumour equal to a conventional target dose. The compartmental doses yielding optimum tumour control probability (TCP) were used as input in an inverse planning system, where the planning basis was the pO2-related tumour images from the MR analysis. Uniform (conventional) and non-uniform IMRT plans were scored both physically and biologically. The consequences of random and systematic errors in the compartmental images were evaluated. The normalized frequency distributions of the tracer concentration and the pO2 Eppendorf measurements were not significantly different. 28% of the tumour had, according to the MR analysis, pO2 values of less than 5 mm Hg. The optimum TCP following a non-uniform dose prescription was about four times higher than that following a uniform dose prescription. The non-uniform IMRT dose distribution resulting from the inverse planning gave a three times higher TCP than that of the uniform distribution.
The TCP and the dose-based plan quality depended on IMRT parameters defined in the inverse planning procedure (fields and step-and-shoot intensity levels). Simulated random and systematic errors in the pO2-related images reduced the TCP for the non-uniform dose prescription. In conclusion, improved tumour control of hypoxic tumours by dose redistribution may be expected following hypoxia imaging, tumour control predictions, inverse treatment planning and IMRT.
Wang, Dafang; Kirby, Robert M.; MacLeod, Rob S.; Johnson, Chris R.
2013-01-01
With the goal of non-invasively localizing cardiac ischemic disease using body-surface potential recordings, we attempted to reconstruct the transmembrane potential (TMP) throughout the myocardium with the bidomain heart model. The task is an inverse source problem governed by partial differential equations (PDE). Our main contribution is solving the inverse problem within a PDE-constrained optimization framework that enables various physically-based constraints in both equality and inequality forms. We formulated the optimality conditions rigorously in the continuum before deriving the finite element discretization, thereby making the optimization independent of discretization choice. Such a formulation was derived for the L2-norm Tikhonov regularization and the total variation minimization. The subsequent numerical optimization was fulfilled by a primal-dual interior-point method tailored to our problem's specific structure. Our simulations used realistic, fiber-included heart models consisting of up to 18,000 nodes, much finer than any inverse models previously reported. With synthetic ischemia data we localized ischemic regions with roughly a 10% false-negative rate or a 20% false-positive rate under conditions of up to 5% input noise. With ischemia data measured from animal experiments, we reconstructed TMPs with roughly 0.9 correlation with the ground truth. While precisely estimating the TMP in general cases remains an open problem, our study shows the feasibility of reconstructing TMP during the ST interval as a means of ischemia localization. PMID:23913980
Han, Eun Young; Paudel, Nava; Sung, Jiwon; Yoon, Myonggeun; Chung, Weon Kuu; Kim, Dong Wook
2016-04-19
The risk of secondary cancer from radiation treatment remains a concern for long-term breast cancer survivors, especially those treated with radiation at ages younger than 45 years. Treatment modalities aim to maximize dose delivery to the tumor while minimizing radiation doses to neighboring organs, where stray dose can lead to secondary cancers. A new TomoTherapy treatment machine, the TomoHDA, can treat an entire breast with two static but intensity-modulated beams in a slice-by-slice fashion. This feature could reduce scattered and leakage radiation doses. We compared the plan quality and lifetime attributable risk (LAR) of a second malignancy among five treatment modalities: three-dimensional conformal radiation therapy, field-in-field forward-planned intensity-modulated radiation therapy, inverse-planned intensity-modulated radiation therapy (IMRT), volumetric modulated arc therapy, and TomoDirect mode on the TomoHDA system. Ten breast cancer patients were selected for retrospective analysis. Organ equivalent doses, plan characteristics, and LARs were compared. Out-of-field organ doses were measured with radio-photoluminescence glass dosimeters. Although the IMRT plan provided overall better plan quality, including the lowest probability of pneumonitis, it caused the second highest LAR. The TomoTherapy plan provided plan quality comparable to the IMRT plan and posed the lowest total LAR to neighboring organs. Therefore, it can be a better treatment modality for younger patients who have a longer life expectancy.
Pareto joint inversion of 2D magnetotelluric and gravity data
NASA Astrophysics Data System (ADS)
Miernik, Katarzyna; Bogacz, Adrian; Kozubal, Adam; Danek, Tomasz; Wojdyła, Marek
2015-04-01
In this contribution, the first results of the "Innovative technology of petrophysical parameters estimation of geological media using joint inversion algorithms" project are described. At this stage of development, a Pareto joint inversion scheme for 2D MT and gravity data was used. Additionally, seismic data were provided to set some constraints for the inversion. A Sharp Boundary Interface (SBI) approach and a model description based on a set of polygons were used to limit the dimensionality of the solution space. The main engine was based on modified Particle Swarm Optimization (PSO). This algorithm was adapted to handle two or more target functions at once. An additional algorithm was used to eliminate non-realistic solution proposals. Because PSO is a method of stochastic global optimization, it requires many proposals to be evaluated to find a single Pareto solution and then compose a Pareto front. To optimize this stage, parallel computing was used for both the inversion engine and the 2D MT forward solver. The proposed approach to joint inversion has several advantages. First, the Pareto scheme eliminates the cumbersome rescaling of the target functions, which can strongly affect the final solution. Second, the whole set of solutions is created in one optimization run, providing a choice of the final solution. This choice can be based on qualitative data, which are usually very hard to incorporate into a regular inversion scheme. The SBI parameterisation not only limits the problem of dimensionality, but also makes constraining the solution easier. At this stage of work, the decision was made to test the approach using MT and gravity data, because this combination is often used in practice. It is important to mention that the general solution is not limited to these two methods and is flexible enough to be used with more than two sources of data.
The presented results were obtained for synthetic models imitating real geological conditions, where the interesting density distributions are relatively shallow and the resistivity changes are related to deeper parts. Such conditions are well suited for joint inversion of MT and gravity data. In the next stage of development, further code optimization and extensive tests on real data will be carried out. The presented work was supported by the Polish National Centre for Research and Development under contract number POIG.01.04.00-12-279/13
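The "compose a Pareto front" step described above can be illustrated with a minimal non-dominated filter over candidate models scored by two misfits. The random objective values below are stand-ins for the MT and gravity misfits of PSO proposals; the function name and data are illustrative, not from the project code.

```python
import numpy as np

def pareto_front(costs):
    """Return indices of non-dominated rows (both objectives minimized)."""
    idx = []
    for i, c in enumerate(costs):
        dominated = any(
            np.all(other <= c) and np.any(other < c)
            for j, other in enumerate(costs) if j != i
        )
        if not dominated:
            idx.append(i)
    return idx

rng = np.random.default_rng(0)
costs = rng.random((50, 2))      # (MT misfit, gravity misfit) per proposal
front = pareto_front(costs)
print(len(front), "non-dominated proposals out of", len(costs))
```

In a real run the filter would be applied to the misfit pairs accumulated over all PSO evaluations, leaving the trade-off curve from which a final model is chosen.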
Inverse optimal design of the radiant heating in materials processing and manufacturing
NASA Astrophysics Data System (ADS)
Fedorov, A. G.; Lee, K. H.; Viskanta, R.
1998-12-01
Combined convective, conductive, and radiative heat transfer is analyzed during heating of a continuously moving load in an industrial radiant oven. A transient, quasi-three-dimensional model of heat transfer between a continuous load of parts moving inside an oven on a conveyor belt at a constant speed and an array of radiant heaters/burners placed inside the furnace enclosure is developed. The model accounts for radiative exchange between the heaters and the load, heat conduction in the load, and convective heat transfer between the moving load and the oven environment. The thermal model developed has been used to construct a general framework for the inverse optimal design of an industrial oven. In particular, a procedure based on the Levenberg-Marquardt nonlinear least-squares optimization algorithm has been developed to obtain the optimal temperatures of the heaters/burners that need to be specified to achieve a prescribed temperature distribution on the surface of a load. The results of calculations for several sample cases are reported to illustrate the capabilities of the procedure developed for the optimal inverse design of an industrial radiant oven.
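The Levenberg-Marquardt step can be sketched in miniature. The view-factor-like kernel `G` and the uniform target profile below are invented for illustration; only the structure (residuals between predicted and prescribed load-surface temperatures, minimized over heater temperatures) follows the abstract.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical linear forward model: each load-surface temperature is a
# convex, view-factor-like combination of the heater temperatures.
n_heaters, n_surface = 4, 12
x = np.linspace(0.0, 1.0, n_surface)
centers = np.linspace(0.1, 0.9, n_heaters)
G = np.exp(-((x[:, None] - centers[None, :]) ** 2) / 0.05)
G /= G.sum(axis=1, keepdims=True)

T_target = np.full(n_surface, 600.0)   # prescribed load-surface temperature

def residuals(T_heaters):
    return G @ T_heaters - T_target

T0 = np.full(n_heaters, 500.0)
sol = least_squares(residuals, T0, method="lm")   # Levenberg-Marquardt
print(sol.x)                  # optimal heater temperatures
print(np.abs(sol.fun).max())  # worst mismatch on the load surface
```

In the paper the forward model is the full conduction-convection-radiation simulation; here a linear surrogate keeps the sketch self-contained.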
An optimization method for the problems of thermal cloaking of material bodies
NASA Astrophysics Data System (ADS)
Alekseev, G. V.; Levin, V. A.
2016-11-01
Inverse heat-transfer problems related to constructing special thermal devices such as cloaking shells, thermal-illusion or thermal-camouflage devices, and heat-flux concentrators are studied. The heat-diffusion equation with a variable heat-conductivity coefficient is used as the initial heat-transfer model. An optimization method is used to reduce the above inverse problems to the respective control problem. The solvability of the above control problem is proved, an optimality system that describes necessary extremum conditions is derived, and a numerical algorithm for solving the control problem is proposed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Y; Tan, J; Jiang, S
Purpose: High dose rate (HDR) brachytherapy treatment planning is conventionally performed in a manual fashion. Yet it is highly desirable to perform computerized automated planning to improve treatment planning efficiency, eliminate human errors, and reduce plan quality variation. The goal of this research is to develop an automatic treatment planning tool for HDR brachytherapy with a cylinder applicator for vaginal cancer. Methods: After inserting the cylinder applicator into the patient, a CT scan was acquired and was loaded to an in-house developed treatment planning software. The cylinder applicator was automatically segmented using image-processing techniques. CTV was generated based on user-specified treatment depth and length. Locations of relevant points (apex point, prescription point, and vaginal surface point), central applicator channel coordinates, and dwell positions were determined according to their geometric relations with the applicator. Dwell time was computed through an inverse optimization process. The planning information was written into DICOM-RT plan and structure files to transfer the automatically generated plan to a commercial treatment planning system for plan verification and delivery. Results: We have tested the system retrospectively in nine patients treated with vaginal cylinder applicator. These cases were selected with different treatment prescriptions, lengths, depths, and cylinder diameters to represent a large patient population. Our system was able to generate treatment plans for these cases with clinically acceptable quality. Computation time varied from 3–6 min. Conclusion: We have developed a system to perform automated treatment planning for HDR brachytherapy with a cylinder applicator. Such a novel system has greatly improved treatment planning efficiency and reduced plan quality variation. It also served as a testbed to demonstrate the feasibility of automatic HDR treatment planning for more complicated cases.
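Because dose is linear in the dwell times, the "inverse optimization process" for dwell times can be posed, in its simplest form, as a nonnegative least-squares problem. The geometry and the inverse-square point-source kernel below are illustrative assumptions, not the clinical TG-43 dose formalism or the authors' actual solver.

```python
import numpy as np
from scipy.optimize import nnls

dwell_z = np.linspace(0.0, 5.0, 10)   # dwell positions along the channel (cm)
pts_r = 1.5                           # prescription points 1.5 cm off-axis
pts_z = np.linspace(0.0, 5.0, 30)

# Dose at each prescription point is linear in the dwell times; assume a
# simple 1/r^2 falloff from each dwell position.
A = 1.0 / (pts_r ** 2 + (pts_z[:, None] - dwell_z[None, :]) ** 2)
d_rx = np.ones(30)                    # uniform prescription dose (arbitrary units)

t, resid = nnls(A, d_rx)              # nonnegative dwell times
print(t.round(3), float(resid))
```

The nonnegativity constraint is physical (a source cannot dwell for negative time), which is why NNLS-style solvers are a natural fit for this step.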
NASA Technical Reports Server (NTRS)
Dubovik, O; Herman, M.; Holdak, A.; Lapyonok, T.; Taure, D.; Deuze, J. L.; Ducos, F.; Sinyuk, A.
2011-01-01
The proposed development is an attempt to enhance aerosol retrieval by emphasizing statistical optimization in the inversion of advanced satellite observations. This optimization concept improves retrieval accuracy by relying on knowledge of the measurement error distribution. Efficient application of such optimization requires pronounced data redundancy (an excess of the number of measurements over the number of unknowns), which is not common in satellite observations. The POLDER imager on board the PARASOL microsatellite registers spectral polarimetric characteristics of the reflected atmospheric radiation at up to 16 viewing directions over each observed pixel. The completeness of such observations is notably higher than for most currently operating passive satellite aerosol sensors. This provides an opportunity for profound utilization of statistical optimization principles in satellite data inversion. The proposed retrieval scheme is designed as a statistically optimized multi-variable fitting of all available angular observations obtained by the POLDER sensor in the window spectral channels where absorption by gas is minimal. The total number of such observations by PARASOL always exceeds a hundred over each pixel, and the statistical optimization concept promises to be efficient even if the algorithm retrieves several tens of aerosol parameters. Based on this idea, the proposed algorithm uses a large number of unknowns and is aimed at retrieval of an extended set of parameters affecting the measured radiation.
Genetic Algorithms Evolve Optimized Transforms for Signal Processing Applications
2005-04-01
coefficient sets describing inverse transforms and matched forward/ inverse transform pairs that consistently outperform wavelets for image compression and reconstruction applications under conditions subject to quantization error.
Trajectory optimization for the National Aerospace Plane
NASA Technical Reports Server (NTRS)
Lu, Ping
1992-01-01
The primary objective of this research is to develop an efficient and robust trajectory optimization tool for the optimal ascent problem of the National Aerospace Plane (NASP). This report is organized in the following order to summarize the complete work: Section two states the formulation and models of the trajectory optimization problem. An inverse dynamics approach to the problem is introduced in Section three. Optimal trajectories corresponding to various conditions and performance parameters are presented in Section four. A midcourse nonlinear feedback controller is developed in Section five. Section six demonstrates the performance of the inverse dynamics approach and midcourse controller during disturbances. Section seven discusses rocket assisted ascent which may be beneficial when orbital altitude is high. Finally, Section eight recommends areas of future research.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-17
... during the winter time, when frequent and persistent temperature inversions occur, were specifically... winds and strong temperature inversions. These meteorological conditions may trap emissions within the... show a very high frequency of surface temperature inversions in the winter. Due to the meteorology...
Comparison of optimal design methods in inverse problems
NASA Astrophysics Data System (ADS)
Banks, H. T.; Holm, K.; Kappel, F.
2011-07-01
Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric-based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher information matrix. A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model (Banks H T and Tran H T 2009 Mathematical and Experimental Modeling of Physical and Biological Processes (Boca Raton, FL: Chapman and Hall/CRC)), the standard harmonic oscillator model (Banks H T and Tran H T 2009) and a popular glucose regulation model (Bergman R N, Ider Y Z, Bowden C R and Cobelli C 1979 Am. J. Physiol. 236 E667-77 De Gaetano A and Arino O 2000 J. Math. Biol. 40 136-68 Toffolo G, Bergman R N, Finegood D T, Bowden C R and Cobelli C 1980 Diabetes 29 979-90).
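The D- and E-optimality criteria compared in the abstract can be made concrete with a toy parameter-estimation model. The exponential-decay model, noise level, and candidate sampling grids below are assumptions for illustration; the criteria themselves (determinant and largest eigenvalue of the inverse Fisher information matrix) are standard.

```python
import numpy as np

def fisher(times, theta=(1.0, 0.5), sigma=0.1):
    """Fisher information for y = a*exp(-k t) with i.i.d. Gaussian noise."""
    a, k = theta
    t = np.asarray(times, float)
    # sensitivity columns: dy/da and dy/dk
    S = np.column_stack([np.exp(-k * t), -a * t * np.exp(-k * t)])
    return S.T @ S / sigma**2

def d_criterion(F):   # D-optimal: minimize det of the covariance
    return np.linalg.det(np.linalg.inv(F))

def e_criterion(F):   # E-optimal: minimize largest covariance eigenvalue
    return np.linalg.eigvalsh(np.linalg.inv(F)).max()

grid_early = np.linspace(0.1, 2.0, 8)  # samples while the signal still varies
grid_late = np.linspace(4.0, 8.0, 8)   # samples after the signal has decayed
for name, g in [("early", grid_early), ("late", grid_late)]:
    F = fisher(g)
    print(name, d_criterion(F), e_criterion(F))
```

Both criteria should strongly prefer the early grid, where the model sensitivities are large; this mirrors how the paper's designs rank sampling distributions through the Fisher information matrix.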
Fractional Gaussian model in global optimization
NASA Astrophysics Data System (ADS)
Dimri, V. P.; Srivastava, R. P.
2009-12-01
The Earth system is inherently non-linear, and it can be characterized well only if we incorporate non-linearity in the formulation and solution of the problem. The general tool often used for characterization of the Earth system is inversion. Traditionally, inverse problems are solved using least-squares-based inversion after linearizing the formulation. The initial model in such inversion schemes is often assumed to follow a Gaussian posterior probability distribution. It is now well established that most physical properties of the Earth follow a power law (fractal distribution). Thus, selecting the initial model based on a power-law probability distribution will provide a more realistic solution. We present a new method which can draw samples of the posterior probability density function very efficiently using fractal-based statistics. The application of the method is demonstrated on the inversion of band-limited seismic data with well control. We used a fractal-based probability density function, characterized by the mean, variance and Hurst coefficient of the model space, to draw the initial model. This initial model is then used in a global optimization inversion scheme. Inversion using initial models generated by our method gives higher-resolution estimates of the model parameters than the hitherto-used gradient-based linear inversion method.
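Drawing correlated model proposals with a prescribed Hurst coefficient, as the abstract describes, can be sketched with the exact fractional-Gaussian-noise covariance and a Cholesky factor. The mean, variance, Hurst value, and series length below are illustrative choices, not the paper's settings.

```python
import numpy as np

def fgn_samples(n, H, mean=0.0, std=1.0, n_draws=500, seed=3):
    """Draw n_draws series of fractional Gaussian noise with Hurst H."""
    k = np.arange(n)
    # fGn autocovariance: 0.5*(|k+1|^2H - 2|k|^2H + |k-1|^2H)
    gamma = 0.5 * (np.abs(k + 1) ** (2 * H) - 2 * np.abs(k) ** (2 * H)
                   + np.abs(k - 1) ** (2 * H))
    C = gamma[np.abs(k[:, None] - k[None, :])]          # Toeplitz covariance
    Lc = np.linalg.cholesky(C + 1e-12 * np.eye(n))      # jitter for safety
    z = np.random.default_rng(seed).standard_normal((n_draws, n))
    return mean + std * z @ Lc.T

draws = fgn_samples(64, H=0.8)
print(draws.shape, float(draws.mean()))
```

For H > 0.5 the samples are positively correlated at short lags, i.e. spatially persistent, which is the property that makes such draws more Earth-like initial models than white Gaussian proposals.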
Computational Methods for Identification, Optimization and Control of PDE Systems
2010-04-30
focused on the development of numerical methods and software specifically for the purpose of solving control, design, and optimization problems where...that provide the foundations of simulation software must play an important role in any research of this type, the demands placed on numerical methods...y sus Aplicaciones, Ciudad de Cordoba, Argentina, October 2007. 3. Inverse Problems in Deployable Space Structures, Fourth Conference on Inverse
The 2-D magnetotelluric inverse problem solved with optimization
NASA Astrophysics Data System (ADS)
van Beusekom, Ashley E.; Parker, Robert L.; Bank, Randolph E.; Gill, Philip E.; Constable, Steven
2011-02-01
The practical 2-D magnetotelluric inverse problem seeks to determine the shallow-Earth conductivity structure using finite and uncertain data collected on the ground surface. We present an approach based on using PLTMG (Piecewise Linear Triangular MultiGrid), a special-purpose code for optimization with second-order partial differential equation (PDE) constraints. At each frequency, the electromagnetic field and conductivity are treated as unknowns in an optimization problem in which the data misfit is minimized subject to constraints that include Maxwell's equations and the boundary conditions. Within this framework it is straightforward to accommodate upper and lower bounds or other conditions on the conductivity. In addition, as the underlying inverse problem is ill-posed, constraints may be used to apply various kinds of regularization. We discuss some of the advantages and difficulties associated with using PDE-constrained optimization as the basis for solving large-scale nonlinear geophysical inverse problems. Combined transverse electric and transverse magnetic complex admittances from the COPROD2 data are inverted. First, we invert penalizing size and roughness giving solutions that are similar to those found previously. In a second example, conventional regularization is replaced by a technique that imposes upper and lower bounds on the model. In both examples the data misfit is better than that obtained previously, without any increase in model complexity.
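The two regularization strategies the abstract contrasts, a conventional smoothing penalty versus upper and lower bounds on the model, can be shown on a toy linear ill-posed system. The kernel, "true" conductivity-like profile, noise level, and regularization weight below are all invented; the real problem is nonlinear and PDE-constrained.

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(1)
n = 40
x = np.linspace(0.0, 1.0, n)
K = np.exp(-5.0 * np.abs(x[:, None] - x[None, :])) / n  # smoothing kernel
m_true = np.where((x > 0.3) & (x < 0.6), 2.0, 1.0)      # blocky "conductivity"
d = K @ m_true + 1e-4 * rng.standard_normal(n)

# (i) Tikhonov penalty: augment the system with lam * I
lam = 1e-3
A = np.vstack([K, lam * np.eye(n)])
b = np.concatenate([d, np.zeros(n)])
m_tik = lsq_linear(A, b).x

# (ii) replace the penalty with physically motivated box constraints
m_box = lsq_linear(K, d, bounds=(1.0, 2.0)).x

print(np.linalg.norm(m_tik - m_true), np.linalg.norm(m_box - m_true))
```

The bounded solution is guaranteed to stay within the physical range, which is the kind of hard constraint that the PDE-constrained framework in the paper accommodates directly.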
Trajectory planning and control of a 6 DOF manipulator with Stewart platform-based mechanism
NASA Technical Reports Server (NTRS)
Nguyen, Charles C.; Antrazi, Sami
1990-01-01
The trajectory planning and control of a robot manipulator with 6 degrees of freedom, designed based on the mechanism of the Stewart Platform, were studied. First, the main components of the manipulator are described along with its operation. Solutions for the forward and inverse kinematics of the manipulator are briefly presented. Two trajectory planning schemes are then developed using the manipulator inverse kinematics to track straight lines and circular paths. Finally, experiments conducted to study the performance of the developed planning schemes in tracking a straight line and a circle are presented and discussed.
Significance of the model considering mixed grain-size for inverse analysis of turbidites
NASA Astrophysics Data System (ADS)
Nakao, K.; Naruse, H.; Tokuhashi, S., Sr.
2016-12-01
A method for inverse analysis of turbidity currents is proposed for application to field observations. Estimating the initial conditions of catastrophic events from field observations is important in sedimentological research. For instance, various inverse analyses have been used to estimate hydraulic conditions from topographic observations of pyroclastic flows (Rossano et al., 1996), real-time monitored debris-flow events (Fraccarollo and Papa, 2000), tsunami deposits (Jaffe and Gelfenbaum, 2007) and ancient turbidites (Falcini et al., 2009). These inverse analyses need forward models, and most turbidity-current models employ uniform grain-size particles. Turbidity currents, however, are best characterized by variations in their grain-size distribution. Although numerical models with mixed grain-size particles exist, their computational cost makes application to natural examples difficult (Lesshaft et al., 2011). Here we extend a turbidity-current model based on the non-steady 1D shallow-water equations to mixed grain-size particles at low calculation cost and apply the model to inverse analysis. In this study, we compared two forward models considering uniform and mixed grain-size particles, respectively. We adopted an inverse analysis based on the Simplex method that optimizes the initial conditions (thickness, depth-averaged velocity and depth-averaged volumetric concentration of a turbidity current) with multi-point starts, and employed the result of the forward model [h: 2.0 m, U: 5.0 m/s, C: 0.01%] as reference data. The results show that the inverse analysis using the mixed grain-size model recovered the known initial condition of the reference data even when the optimization started from conditions deviating from the true solution, whereas the inverse analysis using the uniform grain-size model requires starting parameters within a quite narrow range near the solution.
The uniform grain-size model often converges to a local optimum that differs significantly from the true solution. In conclusion, we propose an optimization method based on a model considering mixed grain-size particles, and show its application to examples of turbidites in the Kiyosumi Formation, Boso Peninsula, Japan.
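The multi-start Simplex (Nelder-Mead) inversion of (h, U, C) can be sketched with a toy stand-in for the forward model. The deposit-thickness formula below is an assumption chosen only so the example is cheap and self-contained; the paper's forward model is the 1D shallow-water turbidity-current code.

```python
import numpy as np
from scipy.optimize import minimize

stations = np.linspace(1.0, 10.0, 8)   # downstream observation points (km)

def forward(h, U, C):
    # toy deposit profile from flow thickness h, velocity U, concentration C
    return C * h * U * np.exp(-stations / (h * U))

ref = forward(2.0, 5.0, 1e-4)          # "observed" data from known conditions

def misfit(p):
    h, U, C = p
    if h <= 0 or U <= 0 or C <= 0:     # reject unphysical proposals
        return 1e9
    return np.sum((forward(h, U, C) - ref) ** 2) / np.sum(ref ** 2)

starts = [(1.0, 2.0, 5e-5), (3.0, 8.0, 2e-4), (0.5, 1.0, 1e-5)]
results = [minimize(misfit, s, method="Nelder-Mead",
                    options={"xatol": 1e-12, "fatol": 1e-16, "maxiter": 5000})
           for s in starts]
best = min(results, key=lambda r: r.fun)
print(best.x, best.fun)
```

Running the simplex from several starting points and keeping the best result is exactly the "multi-point start" safeguard the abstract describes against the local optima that plague the uniform grain-size model.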
Space Objects Maneuvering Detection and Prediction via Inverse Reinforcement Learning
NASA Astrophysics Data System (ADS)
Linares, R.; Furfaro, R.
This paper determines the behavior of Space Objects (SOs) using inverse Reinforcement Learning (RL) to estimate the reward function that each SO is using for control. The approach discussed in this work can be used to analyze maneuvering of SOs from observational data. The inverse RL problem is solved using the Feature Matching approach. This approach determines the optimal reward function that a SO is using while maneuvering by assuming that the observed trajectories are optimal with respect to the SO's own reward function. This paper uses estimated orbital elements data to determine the behavior of SOs in a data-driven fashion.
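The feature-matching idea, pick the reward under which the observed trajectory scores at least as well as the alternatives, can be reduced to a toy selection problem. The feature vectors, the candidate weight sets, and the max-margin selection rule below are simplifications introduced for illustration; the paper's inverse-RL formulation operates on estimated orbital-element trajectories.

```python
import numpy as np

# Feature counts of the observed maneuver and of alternative trajectories
# (made-up stand-ins for orbital-element-derived features).
observed = np.array([1.0, 0.2, 0.0])
alternatives = np.array([[0.8, 0.5, 0.1],
                         [1.2, 0.1, 0.4],
                         [0.9, 0.3, 0.2]])

# Candidate linear reward weight vectors to test.
candidates = [np.array(w, float) for w in
              [(1, -1, -1), (1, 1, -1), (-1, 1, 1), (1, -0.5, -2)]]

def margin(w):
    """Worst-case advantage of the observed trajectory over alternatives."""
    return min(observed @ w - alt @ w for alt in alternatives)

best = max(candidates, key=margin)
print(best, margin(best))
```

A positive margin means the observed behavior is optimal under that reward, which is the consistency condition the feature-matching approach enforces.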
Cooperative inversion of magnetotelluric and seismic data sets
NASA Astrophysics Data System (ADS)
Markovic, M.; Santos, F.
2012-04-01
Milenko Markovic and Fernando Monteiro Santos (IDL, Faculdade de Ciências da Universidade de Lisboa, 1749-016 Lisboa). Inversion of a single geophysical data set has well-known limitations due to the non-linearity of the fields and the non-uniqueness of the model. There is a growing need, in both academia and industry, to use two or more different data sets and thus obtain the subsurface property distribution. In our case, we deal with magnetotelluric and seismic data sets. In our approach, we develop an algorithm based on the fuzzy c-means clustering technique for pattern recognition of geophysical data. Separate inversions are performed at every step, and information is exchanged for model integration. Interrelationships between parameters from different models are not required in analytical form. We investigate how different numbers of clusters affect zonation and the spatial distribution of parameters. In this study, optimization in fuzzy c-means clustering (for magnetotelluric and seismic data) is compared for two cases: first alternating optimization, and then a hybrid method (alternating optimization + quasi-Newton method). Acknowledgment: This work is supported by FCT Portugal
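The alternating-optimization variant of fuzzy c-means that the abstract compares against the hybrid scheme can be written in a few lines. The synthetic 1-D data standing in for co-located resistivity/velocity samples are an assumption for illustration.

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, iters=100, seed=0):
    """Fuzzy c-means via alternating optimization on 1-D data."""
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.random((c, n))
    U /= U.sum(axis=0)                     # memberships sum to 1 per sample
    for _ in range(iters):
        W = U ** m
        centers = (W @ X) / W.sum(axis=1)  # center update
        dist = np.abs(X[None, :] - centers[:, None]) + 1e-12
        inv = dist ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=0)          # membership update
    return centers, U

X = np.concatenate([np.full(20, 10.0), np.full(20, 100.0)]) \
    + np.random.default_rng(1).normal(0.0, 1.0, 40)
centers, U = fuzzy_cmeans(X, c=2)
print(sorted(float(v) for v in centers))   # near the two cluster values
```

Each loop iteration performs the two exact coordinate updates (centers given memberships, memberships given centers); the hybrid method in the abstract replaces part of this loop with quasi-Newton steps to speed convergence.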
Solutions to inverse plume in a crosswind problem using a predictor - corrector method
NASA Astrophysics Data System (ADS)
Vanderveer, Joseph; Jaluria, Yogesh
2013-11-01
An investigation of minimalist solutions to the inverse convection problem of a plume in a crosswind has led to a predictor-corrector method. The inverse problem is to predict the strength and location of the plume with respect to a select few downstream sampling points. This is accomplished with the help of two numerical simulations of the domain at differing source strengths, allowing the generation of two inverse interpolation functions. These functions in turn are utilized by the predictor step to acquire the plume strength. Finally, the same interpolation functions, with corrections from the plume strength, are used to solve for the plume location. Through optimization of the relative location of the sampling points, the minimum number of samples for accurate predictions is reduced to two for the plume strength and three for the plume location. After the optimization, the predictor-corrector method demonstrates global uniqueness of the inverse solution for all test cases. The solution error is less than 1% for both plume strength and plume location. The basic approach could be extended to other inverse convection transport problems, particularly those encountered in environmental flows.
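The predict-strength-then-correct-location structure can be sketched with a toy plume whose sensor reading is linear in source strength. The Gaussian kernel, sensor positions, and "measured" data are assumptions; the real method builds its interpolation functions from full convection simulations.

```python
import numpy as np

sensors = np.array([4.0, 6.0, 9.0])     # downstream sampling points

def simulate(strength, x0):
    # toy plume: reading is linear in strength, Gaussian in location
    return strength * np.exp(-0.5 * (sensors - x0) ** 2 / 4.0)

truth = simulate(3.0, 5.0)              # "measured" plume data

x_est = 4.0                             # initial guess for the location
xs = np.linspace(0.0, 10.0, 1001)
for _ in range(5):
    base = simulate(1.0, x_est)         # unit-strength reference simulation
    # predictor: readings scale linearly with strength, so fit it directly
    q_est = float(base @ truth / (base @ base))
    # corrector: with strength fixed, scan candidate locations
    errs = [np.sum((simulate(q_est, x) - truth) ** 2) for x in xs]
    x_est = float(xs[int(np.argmin(errs))])
print(q_est, x_est)
```

Alternating the two steps drives both estimates toward the true strength (3.0) and location (5.0); the paper's contribution is doing this with only two strength samples and three location samples after optimizing sensor placement.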
NASA Astrophysics Data System (ADS)
Li, Lei; Yu, Long; Yang, Kecheng; Li, Wei; Li, Kai; Xia, Min
2018-04-01
The multiangle dynamic light scattering (MDLS) technique can better estimate particle size distributions (PSDs) than single-angle dynamic light scattering. However, determining the inversion range, angular weighting coefficients, and scattering angle combination is difficult but fundamental to the reconstruction of both unimodal and multimodal distributions. In this paper, we propose a self-adapting regularization method called the wavelet iterative recursion nonnegative Tikhonov-Phillips-Twomey (WIRNNT-PT) algorithm. This algorithm combines a wavelet multiscale strategy with an appropriate inversion method and can self-adaptively resolve several noteworthy issues, including the choice of the weighting coefficients, the inversion range, and the optimal inversion method from two regularization algorithms, for estimating the PSD from MDLS measurements. In addition, the angular dependence of MDLS for estimating the PSDs of polymeric latexes is thoroughly analyzed: the dependence of the results on the number and range of measurement angles was examined in depth to identify the optimal scattering angle combination. Numerical simulations and experimental results for unimodal and multimodal distributions are presented to demonstrate both the validity of the WIRNNT-PT algorithm and the angular dependence of MDLS, and show that the proposed algorithm with a six-angle analysis in the 30-130° range can be satisfactorily applied to retrieve PSDs from MDLS measurements.
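A single nonnegative Tikhonov(-Phillips-Twomey) solve, the inner building block of schemes like WIRNNT-PT, can be sketched on a toy exponential-decay kernel. The radii grid, kernel, "true" unimodal PSD, noise level, and regularization weight below are all illustrative assumptions, not the paper's MDLS model.

```python
import numpy as np
from scipy.optimize import nnls

n = 60
r = np.linspace(50.0, 500.0, n)                 # particle radii (nm), assumed
tau = np.linspace(0.1, 2.0, 40)                 # decay-rate channels, assumed
A = np.exp(-np.outer(tau, r) / 200.0)           # exponential-decay kernel
f_true = np.exp(-0.5 * ((r - 200.0) / 30.0) ** 2)   # unimodal "true" PSD
g = A @ f_true + 1e-5 * np.random.default_rng(2).standard_normal(len(tau))

# min ||A f - g||^2 + lam ||f||^2 subject to f >= 0, via an augmented NNLS
lam = 1e-4
A_aug = np.vstack([A, np.sqrt(lam) * np.eye(n)])
g_aug = np.concatenate([g, np.zeros(n)])
f_est, _ = nnls(A_aug, g_aug)
print(float(np.linalg.norm(A @ f_est - g)))     # residual data misfit
```

Stacking the scaled identity under the kernel turns the regularized problem into a plain nonnegative least-squares solve, with nonnegativity enforcing the physical constraint that a size distribution cannot be negative.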
NASA Technical Reports Server (NTRS)
Ostroff, Aaron J.
1998-01-01
This paper contains a study of two methods for use in a generic nonlinear simulation tool that could be used to determine achievable control dynamics and control power requirements while performing perfect tracking maneuvers over the entire flight envelope. The two methods are NDI (nonlinear dynamic inversion) and the SOFFT (Stochastic Optimal Feedforward and Feedback Technology) feedforward control structure. Equivalent discrete and continuous SOFFT feedforward controllers have been developed. These equivalent forms clearly show that the closed-loop plant model loop is a plant inversion and is the same as the NDI formulation. The main difference is that the NDI formulation has a closed-loop controller structure whereas SOFFT uses an open-loop command model. Continuous, discrete, and hybrid controller structures have been developed and integrated into the formulation. Linear simulation results show that seven different configurations all give essentially the same response, with the NDI hybrid being slightly different. The SOFFT controller gave better tracking performance compared to the NDI controller when a nonlinear saturation element was added. Future plans include evaluation using a nonlinear simulation.
Exact solution for an optimal impermeable parachute problem
NASA Astrophysics Data System (ADS)
Lupu, Mircea; Scheiber, Ernest
2002-10-01
In this paper, direct and inverse boundary problems are solved and analytical solutions are obtained for optimization problems in the case of some nonlinear integral operators. The plane potential flow of an inviscid, incompressible, unbounded fluid jet, which encounters a symmetrical curvilinear obstacle (the deflector of maximal drag), is modeled. Singular integral equations are derived for the direct and inverse problems, and the motion in the auxiliary canonical half-plane is obtained. Next, the optimization problem is solved in an analytical manner. The design of the optimal airfoil is performed and, finally, numerical computations concerning the drag coefficient and other geometrical and aerodynamical parameters are carried out. This model corresponds to the Helmholtz impermeable parachute problem.
Acoustic and elastic waveform inversion best practices
NASA Astrophysics Data System (ADS)
Modrak, Ryan T.
Reaching the global minimum of a waveform misfit function requires careful choices about the nonlinear optimization, preconditioning and regularization methods underlying an inversion. Because waveform inversion problems are susceptible to erratic convergence, one or two test cases are not enough to reliably inform such decisions. We identify best practices instead using two global, one regional and four near-surface acoustic test problems. To obtain meaningful quantitative comparisons, we carry out hundreds of acoustic inversions, varying one aspect of the implementation at a time. Comparing nonlinear optimization algorithms, we find that L-BFGS provides computational savings over nonlinear conjugate gradient methods in a wide variety of test cases. Comparing preconditioners, we show that a new diagonal scaling derived from the adjoint of the forward operator provides better performance than two conventional preconditioning schemes. Comparing regularization strategies, we find that projection, convolution, Tikhonov regularization, and total variation regularization are effective in different contexts. Besides these issues, reliability and efficiency in waveform inversion depend on close numerical attention and care. Implementation details have a strong effect on computational cost, regardless of the chosen material parameterization or nonlinear optimization algorithm. Building on the acoustic inversion results, we carry out elastic experiments with four test problems, three objective functions, and four material parameterizations. The choice of parameterization for isotropic elastic media is found to be more complicated than previous studies suggest, with "wavespeed-like" parameters performing well with phase-based objective functions and Lamé parameters performing well with amplitude-based objective functions.
Reliability and efficiency can be even harder to achieve in transversely isotropic elastic inversions because the rotation angle parameters describing the fast-axis direction are difficult to recover. Using Voigt or Chen-Tromp parameters avoids the need to include rotation angles explicitly and provides an effective strategy for anisotropic inversion. The need for flexible and portable workflow management tools for seismic inversion also poses a major challenge. In a final chapter, the software used to carry out the above experiments is described and instructions for reproducing experimental results are given.
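The kind of optimizer comparison the thesis runs at scale can be illustrated in miniature with SciPy's built-in optimizers on a standard nonconvex test function. Rosenbrock here is only a cheap stand-in for a waveform misfit; the thesis compares the methods on actual seismic inversions.

```python
from scipy.optimize import minimize, rosen, rosen_der

# Compare L-BFGS and nonlinear conjugate gradients on the same misfit,
# starting from the same point, counting function evaluations.
x0 = [1.3, 0.7, 0.8, 1.9, 1.2]
res_lbfgs = minimize(rosen, x0, jac=rosen_der, method="L-BFGS-B")
res_cg = minimize(rosen, x0, jac=rosen_der, method="CG")
print("L-BFGS f-evals:", res_lbfgs.nfev, " CG f-evals:", res_cg.nfev)
```

Both reach the minimum; the interesting quantity for waveform inversion is the evaluation count, since each misfit evaluation there costs a full wave simulation.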
Dosimetric equivalence of nonstandard HDR brachytherapy catheter patterns
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cunha, J. A. M.; Hsu, I-C.; Pouliot, J.
2009-01-15
Purpose: To determine whether alternative high dose rate prostate brachytherapy catheter patterns can result in similar or improved dose distributions while providing better access and reducing trauma. Materials and Methods: Standard prostate cancer high dose rate brachytherapy uses a regular grid of parallel needle positions to guide the catheter insertion. This geometry does not easily allow the physician to avoid piercing the critical structures near the penile bulb, nor does it provide position flexibility in the case of pubic arch interference. This study used CT datasets with 3 mm slice spacing from ten previously treated patients and digitized new catheters following three hypothetical catheter patterns: conical, bi-conical, and fireworks. The conical patterns were used to accommodate robotic delivery through a single entry point. The bi-conical and fireworks patterns were specifically designed to avoid the critical structures near the penile bulb. For each catheter distribution, a plan was optimized with the inverse planning algorithm IPSA and compared with the plan used for treatment. Irrespective of catheter geometry, a plan must fulfill the RTOG-0321 dose criteria for target dose coverage (prostate V100 > 90%) and organ-at-risk dose sparing (bladder V75 < 1 cc, rectum V75 < 1 cc, urethra V125 ≪ 1 cc). Results: The three nonstandard catheter patterns used 16 nonparallel, straight divergent catheters with entry points in the perineum. Thirty plans from ten patients with prostate sizes ranging from 26 to 89 cc were optimized. All nonstandard patterns fulfilled the RTOG criteria whenever the clinical plan did. In some cases, the dose distribution was improved by better sparing of the organs-at-risk.
Conclusion: Alternative catheter patterns can provide the physician with additional ways to treat patients previously considered unsuited for brachytherapy treatment (pubic arch interference) and facilitate robotic guidance of catheter insertion. In addition, alternative catheter patterns may decrease toxicity by avoidance of the critical structures near the penile bulb while still fulfilling the RTOG criteria.
NASA Technical Reports Server (NTRS)
Kim, E.; Tedesco, M.; Reichle, R.; Choudhury, B.; Peters-Lidard, C.; Foster, J.; Hall, D.; Riggs, G.
2006-01-01
Microwave-based retrievals of snow parameters from satellite observations have a long heritage and have so far been generated primarily by regression-based empirical "inversion" methods based on snapshots in time. Direct assimilation of microwave radiance into physical land surface models can be used to avoid errors associated with such retrieval/inversion methods, instead utilizing more straightforward forward models and temporal information. This approach has been used for years for atmospheric parameters by the operational weather forecasting community with great success. Recent developments in forward radiative transfer modeling, physical land surface modeling, and land data assimilation are converging to allow the assembly of an integrated framework for snow/cold lands modeling and radiance assimilation. The objective of the Goddard snow radiance assimilation project is to develop such a framework and explore its capabilities. The key elements of this framework include: a forward radiative transfer model (FRTM) for snow, a snowpack physical model, a land surface water/energy cycle model, and a data assimilation scheme. In fact, multiple models are available for each element enabling optimization to match the needs of a particular study. Together these form a modular and flexible framework for self-consistent, physically-based remote sensing and water/energy cycle studies. In this paper we will describe the elements and the integration plan. All modules will operate within the framework of the Land Information System (LIS), a land surface modeling framework with data assimilation capabilities running on a parallel-node computing cluster. Capabilities for assimilation of snow retrieval products are already under development for LIS. We will describe plans to add radiance-based assimilation capabilities. Plans for validation activities using field measurements will also be discussed.
Inversion of Robin coefficient by a spectral stochastic finite element approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin Bangti; Zou Jun
2008-03-01
This paper investigates a variational approach to the nonlinear stochastic inverse problem of probabilistically calibrating the Robin coefficient from boundary measurements for steady-state heat conduction. The problem is formulated as an optimization problem, and mathematical properties relevant to its numerical computation are investigated. The spectral stochastic finite element method using polynomial chaos is utilized for the discretization of the optimization problem, and its convergence is analyzed. A nonlinear conjugate gradient method is derived for the optimization system. Numerical results for several two-dimensional problems are presented to illustrate the accuracy and efficiency of the stochastic finite element method.
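The polynomial-chaos ingredient of such a discretization can be sketched in isolation. The example below is not the paper's stochastic finite element method, just a one-dimensional Hermite chaos expansion of an assumed quantity of interest u(X) = exp(X) for a standard normal X, with coefficients computed by Galerkin projection using Gauss-Hermite quadrature:

```python
from math import factorial

import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

# Probabilists' Hermite polynomials He_k are orthogonal under the N(0,1)
# density with E[He_j He_k] = k! delta_jk, so the chaos coefficients are
# c_k = E[u(X) He_k(X)] / k!, evaluated here by Gauss-HermiteE quadrature.
nodes, weights = hermegauss(40)
weights = weights / np.sqrt(2 * np.pi)   # normalize to the N(0,1) density

u = np.exp                               # quantity of interest u(X) = exp(X)
order = 8
c = np.array([
    np.sum(weights * u(nodes) * hermeval(nodes, np.eye(order + 1)[k]))
    / factorial(k)
    for k in range(order + 1)
])

# The truncated expansion reproduces u closely within a few standard deviations.
x = np.linspace(-2.0, 2.0, 9)
print("max truncation error:", np.max(np.abs(hermeval(x, c) - u(x))))
```

For this choice of u the exact coefficients are known to be c_k = exp(1/2)/k!, which makes the projection easy to sanity-check.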
Inversion layer MOS solar cells
NASA Technical Reports Server (NTRS)
Ho, Fat Duen
1986-01-01
Inversion layer (IL) Metal Oxide Semiconductor (MOS) solar cells were fabricated. The fabrication technique and problems are discussed. A plan for modeling IL cells is presented. Future work in this area is addressed.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-07
... river basin with spikes in localized PM 2.5 concentrations that coincide with temperature inversions... an inversion break, the monitor returns to a level comparable to, and sometimes less than, the... often occur within the Liberty-Clairton Area during periods of strong nocturnal inversions. When this...
NASA Astrophysics Data System (ADS)
Chembuly, V. V. M. J. Satish; Voruganti, Hari Kumar
2018-04-01
Hyper-redundant manipulators have a larger number of degrees of freedom (DOF) than required to perform a given task. The additional DOF provide the flexibility to work in highly cluttered environments and in constrained workspaces. Inverse kinematics (IK) of hyper-redundant manipulators is complicated by the large number of DOF, and these manipulators have multiple IK solutions. The redundancy gives a choice of selecting the best solution out of multiple solutions based on criteria such as obstacle avoidance, singularity avoidance, joint limit avoidance and joint torque minimization. This paper focuses on the IK solution and redundancy resolution of hyper-redundant manipulators using a classical optimization approach. Joint positions are computed by optimizing various criteria for serial hyper-redundant manipulators traversing different paths in the workspace. Several cases are addressed using this scheme to obtain the inverse kinematic solution while optimizing criteria such as obstacle avoidance and joint limit avoidance.
Hydrologic Process-oriented Optimization of Electrical Resistivity Tomography
NASA Astrophysics Data System (ADS)
Hinnell, A.; Bechtold, M.; Ferre, T. A.; van der Kruk, J.
2010-12-01
Electrical resistivity tomography (ERT) is commonly used in hydrologic investigations. Advances in joint and coupled hydrogeophysical inversion have enhanced the quantitative use of ERT to construct and condition hydrologic models (i.e., identify hydrologic structure and estimate hydrologic parameters). However, the selection of which electrical resistivity data to collect and use is often determined by a combination of data requirements for geophysical analysis, intuition on the part of the hydrogeophysicist, and logistical constraints of the laboratory or field site. One of the advantages of coupled hydrogeophysical inversion is the direct link between the hydrologic model and the individual geophysical data used to condition the model. That is, there is no requirement to collect geophysical data suitable for independent geophysical inversion. The geophysical measurements collected can be optimized for estimation of hydrologic model parameters rather than to develop a geophysical model. Using a synthetic model of drip irrigation, we evaluate the value of individual resistivity measurements for describing the soil hydraulic properties and then use this information to build a data set optimized for characterizing hydrologic processes. We then compare the information content in the optimized data set with the information content in a data set optimized using a Jacobian sensitivity analysis.
Full Waveform Inversion for Seismic Velocity And Anelastic Losses in Heterogeneous Structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Askan, A.; /Carnegie Mellon U.; Akcelik, V.
2009-04-30
We present a least-squares optimization method for solving the nonlinear full waveform inverse problem of determining the crustal velocity and intrinsic attenuation properties of sedimentary valleys in earthquake-prone regions. Given a known earthquake source and a set of seismograms generated by the source, the inverse problem is to reconstruct the anelastic properties of a heterogeneous medium with possibly discontinuous wave velocities. The inverse problem is formulated as a constrained optimization problem, where the constraints are the partial and ordinary differential equations governing the anelastic wave propagation from the source to the receivers in the time domain. This leads to a variational formulation in terms of the material model plus the state variables and their adjoints. We employ a wave propagation model in which the intrinsic energy-dissipating nature of the soil medium is modeled by a set of standard linear solids. The least-squares optimization approach to inverse wave propagation presents the well-known difficulties of ill posedness and multiple minima. To overcome ill posedness, we include a total variation regularization functional in the objective function, which annihilates highly oscillatory material property components while preserving discontinuities in the medium. To treat multiple minima, we use a multilevel algorithm that solves a sequence of subproblems on increasingly finer grids with increasingly higher frequency source components to remain within the basin of attraction of the global minimum. We illustrate the methodology with high-resolution inversions for two-dimensional sedimentary models of the San Fernando Valley, under SH-wave excitation. We perform inversions for both the seismic velocity and the intrinsic attenuation using synthetic waveforms at the observer locations as pseudo-observed data.
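The role of the total variation functional can be seen in a small sketch (illustrative only; the smoothing parameter eps is an arbitrary choice): TV penalizes oscillatory models far more than blocky ones, which is why it suppresses high-frequency oscillations while preserving discontinuities.

```python
import numpy as np

def total_variation(m, eps=1e-6):
    """Smoothed total-variation functional for a 2D material model.

    TV(m) = sum sqrt(|grad m|^2 + eps); the small eps keeps the
    functional differentiable where the gradient vanishes.
    """
    dx = np.diff(m, axis=0, append=m[-1:, :])   # forward differences
    dy = np.diff(m, axis=1, append=m[:, -1:])   # (edge rows/cols replicated)
    return np.sum(np.sqrt(dx**2 + dy**2 + eps))

# A blocky model with one discontinuity has far lower TV than an
# oscillatory model spanning the same value range.
blocky = np.zeros((32, 32))
blocky[16:, :] = 1.0
oscillatory = np.indices((32, 32)).sum(axis=0) % 2    # checkerboard
print(total_variation(blocky) < total_variation(oscillatory))   # True
```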
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pablant, N. A.; Bell, R. E.; Bitter, M.
2014-11-15
Accurate tomographic inversion is important for diagnostic systems on stellarators and tokamaks which rely on measurements of line integrated emission spectra. A tomographic inversion technique based on spline optimization with enforcement of constraints is described that can produce unique and physically relevant inversions even in situations with noisy or incomplete input data. This inversion technique is routinely used in the analysis of data from the x-ray imaging crystal spectrometer (XICS) installed at the Large Helical Device. The XICS diagnostic records a 1D image of line integrated emission spectra from impurities in the plasma. Through the use of Doppler spectroscopy and tomographic inversion, XICS can provide profile measurements of the local emissivity, temperature, and plasma flow. Tomographic inversion requires the assumption that these measured quantities are flux surface functions, and that a known plasma equilibrium reconstruction is available. In the case of low signal levels or partial spatial coverage of the plasma cross-section, standard inversion techniques utilizing matrix inversion and linear regularization often cannot produce unique and physically relevant solutions. The addition of physical constraints, such as parameter ranges, derivative directions, and boundary conditions, allows unique solutions to be found reliably. The constrained inversion technique described here utilizes a modified Levenberg-Marquardt optimization scheme, which introduces a condition avoidance mechanism by selective reduction of search directions. The constrained inversion technique also allows for the addition of more complicated parameter dependencies, for example, geometrical dependence of the emissivity due to asymmetries in the plasma density arising from fast rotation. The accuracy of this constrained inversion technique is discussed, with an emphasis on its applicability to systems with limited plasma coverage.
Intensity-modulated radiation therapy: a review with a physics perspective.
Cho, Byungchul
2018-03-01
Intensity-modulated radiation therapy (IMRT) has been considered the most successful development in radiation oncology since the introduction of computed tomography into treatment planning, which enabled three-dimensional conformal radiotherapy in the 1980s. More than three decades have passed since the concept of inverse planning was first introduced in 1982, and IMRT has become the most important and common modality in radiation therapy. This review presents developments in inverse IMRT treatment planning and IMRT delivery using multileaf collimators, along with the associated key concepts. Other relevant issues and future perspectives are also presented.
NASA Astrophysics Data System (ADS)
Horesh, L.; Haber, E.
2009-09-01
The ℓ1 minimization problem has been studied extensively in the past few years. Recently, there has been a growing interest in its application to inverse problems. Most studies have concentrated on devising ways for sparse representation of a solution using a given prototype dictionary. Very few studies have addressed the more challenging problem of optimal dictionary construction, and even these were primarily devoted to the simplistic sparse coding application. In this paper, a sensitivity analysis of the inverse solution with respect to the dictionary is presented. This analysis reveals some of the salient features and intrinsic difficulties associated with the dictionary design problem. Equipped with these insights, we propose an optimization strategy that alleviates these hurdles while utilizing the derived sensitivity relations for the design of a locally optimal dictionary. Our optimality criterion is based on local minimization of the Bayesian risk, given a set of training models. We present a mathematical formulation and an algorithmic framework to achieve this goal. The proposed framework offers the design of dictionaries for inverse problems that incorporate non-trivial, non-injective observation operators, where the data and the recovered parameters may reside in different spaces. We test our algorithm and show that it yields improved dictionaries for a diverse set of inverse problems in geophysics and medical imaging.
Near constant-time optimal piecewise LDR to HDR inverse tone mapping
NASA Astrophysics Data System (ADS)
Chen, Qian; Su, Guan-Ming; Yin, Peng
2015-02-01
In backward compatible HDR image/video compression, a general approach is to reconstruct HDR from compressed LDR as a prediction of the original HDR, which is referred to as inverse tone mapping. Experimental results show that a 2-piecewise 2nd order polynomial has better mapping accuracy than a 1-piece high order polynomial or a 2-piecewise linear mapping, but it is also the most time-consuming method because finding the optimal pivot point that splits the LDR range into 2 pieces requires an exhaustive search. In this paper, we propose a fast algorithm that completes optimal 2-piecewise 2nd order polynomial inverse tone mapping in near constant time without quality degradation. We observe that in the least squares solution, each entry in the intermediate matrix can be written as the sum of some basic terms, which can be pre-calculated into look-up tables. Since solving the matrix becomes looking up values in tables, computation time barely differs regardless of the number of points searched. Hence, we can carry out the most thorough pivot point search to find the optimal pivot that minimizes MSE in near constant time. Experiments show that our proposed method achieves the same PSNR performance while reducing computation time by a factor of 60 compared to the traditional exhaustive search in 2-piecewise 2nd order polynomial inverse tone mapping with a continuity constraint.
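A minimal sketch of the look-up-table idea as we read it (hypothetical function names; the paper's continuity constraint across the pivot is not enforced here): every normal-equation entry for a quadratic segment is a sum of x^k, y·x^k, or y² over that segment, so prefix sums make each candidate pivot's least-squares solve O(1).

```python
import numpy as np

def fit_two_piece_quadratic(x, y):
    """Exhaustive pivot search made cheap: per-pivot cost is O(1) because
    every normal-equation entry is a difference of precomputed prefix sums
    of the basic terms x**k (k=0..4), y*x**k (k=0..2), and y**2."""
    order = np.argsort(x)
    x, y = x[order], y[order]
    n = len(x)
    zero = np.zeros(1)
    pw = [np.concatenate([zero, np.cumsum(x**k)]) for k in range(5)]
    py = [np.concatenate([zero, np.cumsum(y * x**k)]) for k in range(3)]
    py2 = np.concatenate([zero, np.cumsum(y**2)])

    def seg(a, b):
        """Least-squares quadratic over x[a:b] via table lookups only;
        returns (sum of squared residuals, coefficients of c0+c1*x+c2*x**2)."""
        s = [tab[b] - tab[a] for tab in pw]
        t = np.array([tab[b] - tab[a] for tab in py])
        A = np.array([[s[0], s[1], s[2]],
                      [s[1], s[2], s[3]],
                      [s[2], s[3], s[4]]])
        c = np.linalg.solve(A, t)
        return (py2[b] - py2[a]) - c @ t, c

    best = min(range(4, n - 3), key=lambda p: seg(0, p)[0] + seg(p, n)[0])
    return best, seg(0, best)[1], seg(best, n)[1]

# Data from two exact quadratic pieces split at x = 0.5 (index 100).
x = np.linspace(0.0, 1.0, 200)
y = np.where(x < 0.5, x**2, 2 * x - x**2)
pivot, c_lo, c_hi = fit_two_piece_quadratic(x, y)
print("recovered pivot index:", pivot)
```

On data generated from two exact quadratic pieces, the full pivot search recovers the generating split point with near-zero residual.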
Comparative study of inversion methods of three-dimensional NMR and sensitivity to fluids
NASA Astrophysics Data System (ADS)
Tan, Maojin; Wang, Peng; Mao, Keyu
2014-04-01
Three-dimensional nuclear magnetic resonance (3D NMR) logging can simultaneously measure transverse relaxation time (T2), longitudinal relaxation time (T1), and diffusion coefficient (D). These parameters can be used to distinguish fluids in porous reservoirs. For 3D NMR logging, the relaxation mechanism and the mathematical model, a Fredholm equation, are introduced, and the inversion methods, including the Singular Value Decomposition (SVD), Butler-Reeds-Dawson (BRD), and Global Inversion (GI) methods, are studied in detail. In a simulation test, a multi-echo CPMG activation sequence is first designed, echo trains of ideal fluid models are synthesized, an inversion algorithm is then applied to these synthetic echo trains, and finally a T2-T1-D map is built. Furthermore, the SVD, BRD, and GI methods are applied to the same fluid model, and their computing speed and inversion accuracy are compared and analyzed. When the optimal inversion method and matrix dimension are applied, the inversion results are in good agreement with the assumed fluid model, which indicates that the inversion method of 3D NMR is applicable for fluid typing of oil and gas reservoirs. Additionally, forward modeling and inversion tests are performed on oil-water and gas-water models, and the sensitivity to the fluids under different magnetic field gradients is examined in detail. The effect of the magnetic gradient on fluid typing in 3D NMR logging is studied and the optimal magnetic gradient is chosen.
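The SVD route can be sketched on a 1D analogue (a toy T2-only inversion, not the full 3D T2-T1-D problem; grid sizes and the truncation threshold are arbitrary choices): truncating small singular values stabilizes the ill-conditioned Fredholm kernel.

```python
import numpy as np

# Toy 1D analogue: echo train E(t) = sum_j f_j * exp(-t / T2_j).
t = np.linspace(0.001, 2.0, 200)            # echo times (s)
T2 = np.logspace(-3, 1, 30)                 # relaxation-time grid (s)
K = np.exp(-t[:, None] / T2[None, :])       # discretized Fredholm kernel

f_true = np.zeros(30)
f_true[12], f_true[20] = 1.0, 0.5           # two-component fluid model
data = K @ f_true                           # synthetic echo train

# Truncated SVD: drop singular values below a (hypothetical) threshold,
# which suppresses the noise-amplifying directions of the kernel.
U, s, Vt = np.linalg.svd(K, full_matrices=False)
k = int(np.sum(s > 1e-8 * s[0]))
f_est = Vt[:k].T @ ((U[:, :k].T @ data) / s[:k])
print("kept components:", k,
      "data misfit:", np.linalg.norm(K @ f_est - data))
```

The kept-component count shows how severely the exponential kernel is rank-deficient in practice, which is the reason regularized inversion (SVD, BRD, or GI) is needed at all.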
Vector-model-supported approach in prostate plan optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Eva Sau Fan; Department of Health Technology and Informatics, The Hong Kong Polytechnic University; Wu, Vincent Wing Cheung
The lengthy time consumed by traditional manual plan optimization can limit the use of step-and-shoot intensity-modulated radiotherapy/volumetric-modulated radiotherapy (S&S IMRT/VMAT). A vector-model-based method for retrieving similar radiotherapy cases was developed using the structural and physiologic features extracted from the Digital Imaging and Communications in Medicine (DICOM) files. Planning parameters were retrieved from the selected similar reference case and applied to the test case to bypass the gradual adjustment of planning parameters. Therefore, the planning time spent on the traditional trial-and-error manual optimization approach at the beginning of optimization could be reduced. Each S&S IMRT/VMAT prostate reference database comprised 100 previously treated cases. Prostate cases were replanned with both traditional optimization and vector-model-supported optimization based on the oncologists' clinical dose prescriptions. A total of 360 plans, consisting of 30 cases each of S&S IMRT, 1-arc VMAT, and 2-arc VMAT plans, including first optimization and final optimization with/without vector-model-supported optimization, were compared using the 2-sided t-test and paired Wilcoxon signed rank test, with a significance level of 0.05 and a false discovery rate of less than 0.05. For S&S IMRT, 1-arc VMAT, and 2-arc VMAT prostate plans, there was a significant reduction in planning time and iterations with vector-model-supported optimization, by almost 50%. When the first optimization plans were compared, 2-arc VMAT prostate plans had better plan quality than 1-arc VMAT plans. The volume receiving 35 Gy in the femoral head for 2-arc VMAT plans was reduced with vector-model-supported optimization compared with the traditional manual optimization approach. Otherwise, the quality of plans from both approaches was comparable.
Vector-model-supported optimization was shown to offer a much shorter planning time and fewer iterations without compromising plan quality.
Šimůnek, Jirka; Nimmo, John R.
2005-01-01
A modified version of the Hydrus software package that can directly or inversely simulate water flow in a transient centrifugal field is presented. The inverse solver for parameter estimation of the soil hydraulic parameters is then applied to multirotation transient flow experiments in a centrifuge. Using time-variable water contents measured at a sequence of several rotation speeds, soil hydraulic properties were successfully estimated by numerical inversion of the transient experiments. The inverse method was then evaluated by comparing estimated soil hydraulic properties with those determined independently using an equilibrium analysis. The optimized soil hydraulic properties compared well with those determined using the equilibrium analysis and a steady-state experiment. Multirotation experiments in a centrifuge not only offer significant time savings by accelerating time but also provide significantly more information for the parameter estimation procedure compared to multistep outflow experiments in a gravitational field.
NASA Astrophysics Data System (ADS)
Ren, Tao; Modest, Michael F.; Fateev, Alexander; Clausen, Sønnik
2015-01-01
In this study, we present an inverse calculation model based on the Levenberg-Marquardt optimization method to reconstruct temperature and species concentration from measured line-of-sight spectral transmissivity data for homogeneous gaseous media. The high temperature gas property database HITEMP 2010 (Rothman et al. (2010) [1]), which contains line-by-line (LBL) information for several combustion gas species, such as CO2 and H2O, was used to predict gas spectral transmissivities. The model was validated by retrieving temperatures and species concentrations from experimental CO2 and H2O transmissivity measurements. Optimal wavenumber ranges for CO2 and H2O transmissivity measured across a wide range of temperatures and concentrations were determined according to the performance of inverse calculations. Results indicate that the inverse radiation model shows good feasibility for measurements of temperature and gas concentration.
Deist, T M; Gorissen, B L
2016-02-07
High-dose-rate brachytherapy is a tumor treatment method where a highly radioactive source is brought in close proximity to the tumor. In this paper we develop a simulated annealing algorithm to optimize the dwell times at preselected dwell positions to maximize tumor coverage under dose-volume constraints on the organs at risk. Compared to existing algorithms, our algorithm has advantages in terms of speed and objective value and does not require an expensive general purpose solver. Its success mainly depends on exploiting the efficiency of matrix multiplication and a careful selection of the neighboring states. In this paper we outline its details and make an in-depth comparison with existing methods using real patient data.
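The core annealing loop can be sketched as follows (a toy instance with random dose-rate matrices and made-up prescription and constraint numbers, not the authors' implementation; their speed advantage comes from careful neighbor selection and efficient matrix multiplication):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy dose-rate matrices: rows are calculation points, columns are dwell
# positions; entries give dose per unit dwell time (all values invented).
D_tumor = rng.uniform(0.5, 1.5, (200, 10))
D_oar = rng.uniform(0.0, 0.5, (50, 10))

def objective(t):
    """Tumor coverage (fraction of points >= a 10 Gy prescription), with a
    penalty when more than 5 organ-at-risk points exceed 10 Gy."""
    cov = np.mean(D_tumor @ t >= 10.0)
    violations = np.sum(D_oar @ t > 10.0)
    return cov - max(0, violations - 5)

t = np.full(10, 1.0)                        # initial dwell times (s)
cur = objective(t)
best_t, best_val = t.copy(), cur
temp = 1.0
for step in range(2000):
    cand = np.clip(t + rng.normal(0.0, 0.2, 10), 0.0, None)  # neighbor state
    cand_val = objective(cand)
    delta = cand_val - cur
    # Accept improvements always, worse states with Boltzmann probability.
    if delta > 0 or rng.random() < np.exp(delta / temp):
        t, cur = cand, cand_val
        if cur > best_val:
            best_t, best_val = t.copy(), cur
    temp *= 0.997                           # geometric cooling
print("initial:", objective(np.full(10, 1.0)), "best:", best_val)
```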
Numerical methods for the inverse problem of density functional theory
Jensen, Daniel S.; Wasserman, Adam
2017-07-17
Here, the inverse problem of Kohn–Sham density functional theory (DFT) is often solved in an effort to benchmark and design approximate exchange-correlation potentials. The forward and inverse problems of DFT rely on the same equations but the numerical methods for solving each problem are substantially different. We examine both problems in this tutorial with a special emphasis on the algorithms and error analysis needed for solving the inverse problem. Two inversion methods based on partial differential equation constrained optimization and constrained variational ideas are introduced. We compare and contrast several different inversion methods applied to one-dimensional finite and periodic model systems.
Aguilar, I; Misztal, I; Legarra, A; Tsuruta, S
2011-12-01
Genomic evaluations can be calculated using a unified procedure that combines phenotypic, pedigree and genomic information. Implementation of such a procedure requires the inverse of the relationship matrix based on pedigree and genomic relationships. The objective of this study was to investigate efficient computing options to create relationship matrices based on genomic markers and pedigree information as well as their inverses. SNP marker information was simulated for a panel of 40 K SNPs, with the number of genotyped animals up to 30 000. Matrix multiplication in the computation of the genomic relationship matrix was performed by a simple 'do' loop, by two optimized versions of the loop, and by a specific matrix multiplication subroutine. Inversion was by a generalized inverse algorithm and by a LAPACK subroutine. With the most efficient choices and parallel processing, creation of matrices for 30 000 animals would take a few hours. Matrices required to implement a unified approach can be computed efficiently. Optimizations can be achieved either by modifying existing code or by using efficient automatic optimizations provided by open source or third-party libraries. © 2011 Blackwell Verlag GmbH.
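The genomic-relationship computation the timings refer to can be sketched at small scale (we assume a VanRaden-style construction here; the real matrices are on the order of 30 000 animals by 40 000 markers): the naive loop and a BLAS-backed matrix product compute the same entries, which is the paper's point about optimized multiplication.

```python
import numpy as np

rng = np.random.default_rng(2)
n_animals, n_snps = 200, 1000        # small stand-ins for 30 000 x 40 000

M = rng.integers(0, 3, (n_animals, n_snps)).astype(float)  # genotypes 0/1/2
p = M.mean(axis=0) / 2.0             # observed allele frequencies
Z = M - 2.0 * p                      # center by twice the allele frequency
denom = 2.0 * np.sum(p * (1.0 - p))
G = (Z @ Z.T) / denom                # genomic relationship matrix (BLAS call)

# A naive loop computes the same entries, only far more slowly at scale.
G_loop = np.zeros((5, 5))
for i in range(5):
    for j in range(5):
        G_loop[i, j] = (Z[i] @ Z[j]) / denom
print(np.allclose(G[:5, :5], G_loop))   # True
```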
NASA Astrophysics Data System (ADS)
Marinoni, Marianna; Delay, Frederick; Ackerer, Philippe; Riva, Monica; Guadagnini, Alberto
2016-08-01
We investigate the effect of considering reciprocal drawdown curves for the characterization of hydraulic properties of aquifer systems through inverse modeling based on interference well testing. Reciprocity implies that the drawdown observed in a well B when pumping takes place from well A should strictly coincide with the drawdown observed in A when pumping in B with the same flow rate as in A. In this context, a critical point related to applications of hydraulic tomography is the assessment of the number of available independent drawdown data and their impact on the solution of the inverse problem. The issue arises when inverse modeling relies upon mathematical formulations of the classical single-continuum approach to flow in porous media grounded on Darcy's law. In these cases, introducing reciprocal drawdown curves in the database of an inverse problem is, to a certain extent, equivalent to duplicating information. We present a theoretical analysis of the way a least-squares objective function and a Levenberg-Marquardt minimization algorithm are affected by the introduction of reciprocal information in the inverse problem. We also investigate the way these reciprocal data, possibly corrupted by measurement errors, influence model parameter identification in terms of: (a) the convergence of the inverse model, (b) the optimal values of parameter estimates, and (c) the associated estimation uncertainty. Our theoretical findings are exemplified through a suite of computational examples focused on block-heterogeneous systems with increasing levels of complexity. We find that the introduction of noisy reciprocal information in the objective function of the inverse problem has a very limited influence on the optimal parameter estimates. Convergence of the inverse problem improves when adding diverse (nonreciprocal) drawdown series, but does not improve when reciprocal information is added to condition the flow model.
The uncertainty of the optimal parameter estimates is governed by the strength of the measurement errors and is neither significantly diminished nor increased by adding noisy reciprocal information.
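The duplicated-information effect is easy to reproduce on a generic linear least-squares problem (a stand-in sketch, not the groundwater model): with consistent data, duplicating observations leaves the estimate unchanged, and with small measurement noise it barely moves.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((30, 4))            # generic sensitivity matrix
x_true = rng.standard_normal(4)

# Consistent (noise-free) data: duplicating rows changes nothing.
b = A @ x_true
A_dup = np.vstack([A, A[:15]])              # re-add half the observations
xa, *_ = np.linalg.lstsq(A, b, rcond=None)
xb, *_ = np.linalg.lstsq(A_dup, np.concatenate([b, b[:15]]), rcond=None)
print("noise-free estimates identical:", np.allclose(xa, xb))

# Noisy data: duplication only reweights rows, so the estimate moves
# on the order of the noise level, not by any new information content.
b_noisy = b + 0.01 * rng.standard_normal(30)
x1, *_ = np.linalg.lstsq(A, b_noisy, rcond=None)
x2, *_ = np.linalg.lstsq(A_dup, np.concatenate([b_noisy, b_noisy[:15]]),
                         rcond=None)
print("shift from duplication:", np.linalg.norm(x1 - x2))
```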
A comprehensive comparison of IMRT and VMAT plan quality for prostate cancer treatment
QUAN, ENZHUO M.; LI, XIAOQIANG; LI, YUPENG; WANG, XIAOCHUN; KUDCHADKER, RAJAT J.; JOHNSON, JENNIFER L.; KUBAN, DEBORAH A.; LEE, ANDREW K.; ZHANG, XIAODONG
2013-01-01
Purpose: We performed a comprehensive comparative study of the plan quality between volumetric modulated arc therapy (VMAT) and intensity-modulated radiation therapy (IMRT) for the treatment of prostate cancer. Methods and Materials: Eleven patients with prostate cancer treated at our institution were randomly selected for this study. For each patient, a VMAT plan and a series of IMRT plans using an increasing number of beams (8, 12, 16, 20, and 24 beams) were examined. All plans were generated using our in-house-developed automatic inverse planning (AIP) algorithm. An existing 8-beam clinical IMRT plan, which was used to treat the patient, was used as the reference plan. For each patient, all AIP-generated plans were optimized to achieve the same level of planning target volume (PTV) coverage as the reference plan. Plan quality was evaluated by measuring mean dose to and dose-volume statistics of the organs-at-risk, especially the rectum, from each type of plan. Results: For the same PTV coverage, the AIP-generated VMAT plans had significantly better plan quality in terms of rectum sparing than the 8-beam clinical and AIP-generated IMRT plans (p < 0.0001). However, the differences between the IMRT and VMAT plans in all the dosimetric indices decreased as the number of beams used in IMRT increased. IMRT plan quality was similar or superior to that of VMAT when the number of beams in IMRT was increased to a certain number, which ranged from 12 to 24 for the set of patients studied. The superior VMAT plan quality resulted in approximately 30% more monitor units than the 8-beam IMRT plans, but the delivery time was still less than 3 minutes. Conclusions: Considering the superior plan quality as well as the delivery efficiency of VMAT compared with that of IMRT, VMAT may be the preferred modality for treating prostate cancer. PMID:22704703
Multi-modal and targeted imaging improves automated mid-brain segmentation
NASA Astrophysics Data System (ADS)
Plassard, Andrew J.; D'Haese, Pierre F.; Pallavaram, Srivatsan; Newton, Allen T.; Claassen, Daniel O.; Dawant, Benoit M.; Landman, Bennett A.
2017-02-01
The basal ganglia and limbic system, particularly the thalamus, putamen, internal and external globus pallidus, substantia nigra, and sub-thalamic nucleus, comprise a clinically relevant signal network for Parkinson's disease. Manually tracing these structures requires a combination of high-resolution and specialized sequences at 7T, but it is not feasible to scan clinical patients in those scanners. Targeted imaging sequences at 3T, such as FGATIR and other optimized inversion recovery sequences, have been presented which enhance contrast in a select group of these structures. In this work, we show that a series of atlases generated at 7T can be used to accurately segment these structures at 3T using a combination of standard and optimized imaging sequences, though no one approach provided the best result across all structures. In the thalamus and putamen, a median Dice coefficient over 0.88 and a mean surface distance less than 1.0 mm were achieved using a combination of T1 and optimized inversion recovery imaging sequences. In the internal and external globus pallidus, a Dice coefficient over 0.75 and a mean surface distance less than 1.2 mm were achieved using a combination of T1 and FGATIR imaging sequences. In the substantia nigra and sub-thalamic nucleus, a Dice coefficient over 0.6 and a mean surface distance less than 1.0 mm were achieved using the optimized inversion recovery imaging sequence alone. On average, using T1 and optimized inversion recovery together produced significantly better segmentation results than any individual modality (p < 0.05, Wilcoxon signed-rank test).
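The Dice coefficient reported above measures volumetric overlap between an automated segmentation and a manual trace; a minimal sketch on toy binary masks (shapes and values are illustrative, not from the study):

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice overlap: 2|A ∩ B| / (|A| + |B|); 1.0 for identical masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# two 6x6 squares offset by one voxel
a = np.zeros((10, 10), dtype=bool); a[2:8, 2:8] = True
b = np.zeros((10, 10), dtype=bool); b[3:9, 3:9] = True
print(dice_coefficient(a, b))  # 50/72 ≈ 0.694
```

The mean surface distance quoted alongside Dice would additionally require boundary extraction and a distance transform (e.g. scipy.ndimage.distance_transform_edt), omitted here for brevity.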
Inverse Modelling to Obtain Head Movement Controller Signal
NASA Technical Reports Server (NTRS)
Kim, W. S.; Lee, S. H.; Hannaford, B.; Stark, L.
1984-01-01
Experimentally obtained dynamics of time-optimal, horizontal head rotations have previously been simulated by a sixth-order, nonlinear model driven by rectangular control signals. Electromyography (EMG) recordings have aspects which differ in detail from the theoretical rectangular pulsed control signal. Control signals for time-optimal as well as sub-optimal horizontal head rotations were obtained by means of an inverse modelling procedure. With experimentally measured dynamical data serving as the input, this procedure inverts the model to produce the neurological control signals driving muscles and plant. The relationships between these controller signals and EMG records should contribute to the understanding of the neurological control of movements.
Error reduction and parameter optimization of the TAPIR method for fast T1 mapping.
Zaitsev, M; Steinhoff, S; Shah, N J
2003-06-01
A methodology is presented for the reduction of both systematic and random errors in T(1) determination using TAPIR, a Look-Locker-based fast T(1) mapping technique. The relations between various sequence parameters were carefully investigated in order to develop recipes for choosing optimal sequence parameters. Theoretical predictions for the optimal flip angle were verified experimentally. Inversion pulse imperfections were identified as the main source of systematic errors in T(1) determination with TAPIR. An effective remedy is demonstrated which includes extension of the measurement protocol to include a special sequence for mapping the inversion efficiency itself. Copyright 2003 Wiley-Liss, Inc.
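Look-Locker methods such as TAPIR sample the apparent recovery S(TI) = A − B·exp(−TI/T1*) and then apply the correction T1 = T1*·(B/A − 1); a noiseless numerical sketch of that fit (all parameter values invented for illustration):

```python
import numpy as np

# Synthetic Look-Locker inversion-recovery signal (hypothetical values)
A, B, T1 = 1.0, 2.0, 0.9           # seconds; B ≈ 2A corresponds to perfect inversion
T1_star = T1 / (B / A - 1.0)       # apparent (shortened) relaxation time
ti = np.linspace(0.05, 3.0, 40)    # inversion times
s = A - B * np.exp(-ti / T1_star)

# Recover T1*: log-linearize  ln(A - S) = ln B - TI/T1*
y = np.log(A - s)
slope, intercept = np.polyfit(ti, y, 1)
T1_star_fit = -1.0 / slope
B_fit = np.exp(intercept)
T1_fit = T1_star_fit * (B_fit / A - 1.0)   # Look-Locker correction
```

Imperfect inversion pulses reduce B below 2A, which is why the abstract's remedy of mapping the inversion efficiency itself matters: an erroneous B/A propagates directly into the corrected T1.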
Optimization of CO2 Surface Flux using GOSAT Total Column CO2: First Results for 2009-2010
NASA Astrophysics Data System (ADS)
Basu, S.; Houweling, S.
2011-12-01
Constraining surface flux estimates of CO2 using satellite measurements has been one of the long-standing goals of the atmospheric inverse modeling community. We present the first results of inverting GOSAT total column CO2 measurements for obtaining global monthly CO2 flux maps over one year (June 2009 to May 2010). We use the SRON RemoTeC retrieval of CO2 for our inversions. The SRON retrieval has been shown to have no bias when compared to TCCON total column measurements, and latitudinal gradients of the retrieved CO2 are consistent with gradients deduced from the surface flask network [Butz et al, 2011]. This makes this retrieval an ideal candidate for atmospheric inversions, which are highly sensitive to spurious gradients. Our inversion system is analogous to the CarbonTracker (CT) data assimilation system; it is initialized with the prior CO2 fluxes of CT, and uses the same atmospheric transport model, i.e., TM5. The two major differences are (a) we add GOSAT CO2 data to the inversion in addition to flask data, and (b) we use a 4DVAR optimization system instead of a Kalman filter. We compare inversions using (a) only GOSAT total column CO2 measurements, (b) only surface flask CO2 measurements, and (c) the joint data set of GOSAT and surface flask measurements. We validate GOSAT-only inversions against the NOAA surface flask network and joint inversions against CONTRAIL and other aircraft campaigns. We see that inverted fluxes from a GOSAT-only inversion are consistent with fluxes from a stations-only inversion, reaffirming the low biases in SRON retrievals. From the joint inversion, we estimate the amount of added constraints upon adding GOSAT total column measurements to existing surface layer measurements.
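The 4DVAR system described minimizes a cost balancing prior fluxes against observations; for a linear observation operator the minimizer has a closed form. A toy sketch (the operator H and all dimensions are invented stand-ins, not TM5):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 6, 12                       # flux parameters, observations
x_true = rng.normal(size=n)
H = rng.normal(size=(m, n))        # linearized transport/observation operator (toy)
y = H @ x_true                     # noiseless synthetic column observations
x_b = np.zeros(n)                  # prior (background) fluxes
B = np.eye(n)                      # prior error covariance
R = 0.01 * np.eye(m)               # observation error covariance

# Analysis minimizing J(x) = (x-x_b)' B^-1 (x-x_b)/2 + (Hx-y)' R^-1 (Hx-y)/2
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
x_a = x_b + K @ (y - H @ x_b)
```

In a real 4DVAR setting the minimizer is found iteratively with adjoint-computed gradients rather than this explicit matrix form, but the stationarity condition is the same.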
DOE Office of Scientific and Technical Information (OSTI.GOV)
Purdie, Thomas G., E-mail: Tom.Purdie@rmp.uhn.on.ca; Department of Radiation Oncology, University of Toronto, Toronto, Ontario; Techna Institute, University Health Network, Toronto, Ontario
Purpose: To demonstrate the large-scale clinical implementation and performance of an automated treatment planning methodology for tangential breast intensity modulated radiation therapy (IMRT). Methods and Materials: Automated planning was used to prospectively plan tangential breast IMRT treatment for 1661 patients between June 2009 and November 2012. The automated planning method emulates the manual steps performed by the user during treatment planning, including anatomical segmentation, beam placement, optimization, dose calculation, and plan documentation. The user specifies clinical requirements of the plan to be generated through a user interface embedded in the planning system. The automated method uses heuristic algorithms to define and simplify the technical aspects of the treatment planning process. Results: Automated planning was used in 1661 of 1708 patients receiving tangential breast IMRT during the time interval studied. Therefore, automated planning was applicable in greater than 97% of cases. The time for treatment planning using the automated process is routinely 5 to 6 minutes on standard commercially available planning hardware. We have shown a consistent reduction in plan rejections from plan reviews through the standard quality control process or weekly quality review multidisciplinary breast rounds as we have automated the planning process for tangential breast IMRT. Clinical plan acceptance increased from 97.3% using our previous semiautomated inverse method to 98.9% using the fully automated method. Conclusions: Automation has become the routine standard method for treatment planning of tangential breast IMRT at our institution and is clinically feasible on a large scale. The method has wide clinical applicability and can add tremendous efficiency, standardization, and quality to the current treatment planning process.
The use of automated methods can allow centers to more rapidly adopt IMRT and enhance access to the documented improvements in care for breast cancer patients, using technologies that are widely available and already in clinical use.
Purdie, Thomas G; Dinniwell, Robert E; Fyles, Anthony; Sharpe, Michael B
2014-11-01
To demonstrate the large-scale clinical implementation and performance of an automated treatment planning methodology for tangential breast intensity modulated radiation therapy (IMRT). Automated planning was used to prospectively plan tangential breast IMRT treatment for 1661 patients between June 2009 and November 2012. The automated planning method emulates the manual steps performed by the user during treatment planning, including anatomical segmentation, beam placement, optimization, dose calculation, and plan documentation. The user specifies clinical requirements of the plan to be generated through a user interface embedded in the planning system. The automated method uses heuristic algorithms to define and simplify the technical aspects of the treatment planning process. Automated planning was used in 1661 of 1708 patients receiving tangential breast IMRT during the time interval studied. Therefore, automated planning was applicable in greater than 97% of cases. The time for treatment planning using the automated process is routinely 5 to 6 minutes on standard commercially available planning hardware. We have shown a consistent reduction in plan rejections from plan reviews through the standard quality control process or weekly quality review multidisciplinary breast rounds as we have automated the planning process for tangential breast IMRT. Clinical plan acceptance increased from 97.3% using our previous semiautomated inverse method to 98.9% using the fully automated method. Automation has become the routine standard method for treatment planning of tangential breast IMRT at our institution and is clinically feasible on a large scale. The method has wide clinical applicability and can add tremendous efficiency, standardization, and quality to the current treatment planning process. 
The use of automated methods can allow centers to more rapidly adopt IMRT and enhance access to the documented improvements in care for breast cancer patients, using technologies that are widely available and already in clinical use. Copyright © 2014 Elsevier Inc. All rights reserved.
Towards adjoint-based inversion for rheological parameters in nonlinear viscous mantle flow
NASA Astrophysics Data System (ADS)
Worthen, Jennifer; Stadler, Georg; Petra, Noemi; Gurnis, Michael; Ghattas, Omar
2014-09-01
We address the problem of inferring mantle rheological parameter fields from surface velocity observations and instantaneous nonlinear mantle flow models. We formulate this inverse problem as an infinite-dimensional nonlinear least squares optimization problem governed by nonlinear Stokes equations. We provide expressions for the gradient of the cost functional of this optimization problem with respect to two spatially-varying rheological parameter fields: the viscosity prefactor and the exponent of the second invariant of the strain rate tensor. The adjoint (linearized) Stokes equations, which are characterized by a 4th-order anisotropic viscosity tensor, facilitate efficient computation of the gradient. A quasi-Newton method for the solution of this optimization problem is presented, which requires the repeated solution of both nonlinear forward Stokes and linearized adjoint Stokes equations. For the solution of the nonlinear Stokes equations, we find that Newton’s method is significantly more efficient than a Picard fixed point method. Spectral analysis of the inverse operator given by the Hessian of the optimization problem reveals that the numerical eigenvalues collapse rapidly to zero, suggesting a high degree of ill-posedness of the inverse problem. To overcome this ill-posedness, we employ Tikhonov regularization (favoring smooth parameter fields) or total variation (TV) regularization (favoring piecewise-smooth parameter fields). Solutions of two- and three-dimensional finite element-based model inverse problems show that a constant parameter in the constitutive law can be recovered well from surface velocity observations. Inverting for a spatially-varying parameter field leads to its reasonable recovery, in particular close to the surface. When inferring two spatially varying parameter fields, only an effective viscosity field and the total viscous dissipation are recoverable.
Finally, a model of a subducting plate shows that a localized weak zone at the plate boundary can be partially recovered, especially with TV regularization.
Large-Scale Optimization for Bayesian Inference in Complex Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Willcox, Karen; Marzouk, Youssef
2013-11-12
The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focused on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. The project was a collaborative effort among MIT, the University of Texas at Austin, Georgia Institute of Technology, and Sandia National Laboratories. The research was directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. The MIT--Sandia component of the SAGUARO Project addressed the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas--Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as ``reduce then sample'' and ``sample then reduce.'' In fact, these two approaches are complementary, and can be used in conjunction with each other.
Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-13
... and persistent temperature inversions occurred, were specifically identified as a key source of PM... effect' resulting from an inversion that has a stagnant air pollution mass surrounded by the Oquirrh...
Inverse problems and optimal experiment design in unsteady heat transfer processes identification
NASA Technical Reports Server (NTRS)
Artyukhin, Eugene A.
1991-01-01
Experimental-computational methods for estimating characteristics of unsteady heat transfer processes are analyzed. The methods are based on the principles of distributed parameter system identification. The theoretical basis of such methods is the numerical solution of nonlinear ill-posed inverse heat transfer problems and optimal experiment design problems. Numerical techniques for solving problems are briefly reviewed. The results of the practical application of identification methods are demonstrated when estimating effective thermophysical characteristics of composite materials and thermal contact resistance in two-layer systems.
Raben, Adam; Rusthoven, Kyle E; Sarkar, Abrihup; Glick, Andrew; Benge, Bruce; Jacobs, Dayee; Raben, David
2009-01-01
Favorable dosimetric results have been reported using intraoperative inverse optimization (IO) for permanent prostate brachytherapy. The clinical implications of these improvements in dosimetry are unclear. We review toxicity and early biochemical outcomes for patients implanted using the IO technique. Between 2001 and 2007, 165 patients received permanent prostate implants using real-time IO and had ≥3 months of followup. Dose constraints for inverse planning were: the prostate volume receiving 100% of the prescription dose [prostate V(100)] was >95%; the dose received by 90% of the gland [prostate D(90)] was within the 140-180 Gy dose range; the volume of urethra receiving 150% of the prescription dose [urethra V(150)] was <30%; and the volume of rectal wall receiving 110% of the prescription dose [rectal V(110)] was <1.0 cc. Toxicity was prospectively scored using the Radiation Therapy Oncology Group toxicity scale and the International Prostate Symptom Score questionnaire. Biochemical control was determined using the nadir + 2 ng/mL definition. Mean followup was 30 months (range, 6-63 months). Risk classification was low risk in 89% and intermediate risk in 11%. Iodine-125 sources were used for 161 implants and palladium-103 sources for four implants. The median number of seeds and total activity implanted were 61 and 999 MBq, respectively, for a median prostate volume of 33.6 cc. Late GU and GI morbidity was uncommon. Among patients with at least 24 months of followup, 16% had persistent Grade 2-3 urinary morbidity. Grade 2 rectal bleeding occurred in 1 patient (0.6%). The IO technique for prostate brachytherapy is associated with low rates of late morbidity and excellent early biochemical control. Additionally, the number of seeds and total implanted activity required to achieve a high-quality implant are lower compared with historical controls.
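The dose constraints above (prostate V(100) > 95%, D(90) in the prescription range, and so on) are dose-volume-histogram statistics; a minimal sketch of computing V100 and D90 from a sampled structure dose array (prescription level and dose values are invented for illustration):

```python
import numpy as np

def v_percent(dose, rx, pct=100.0):
    """Percent of the structure receiving at least pct% of the prescription dose."""
    return np.mean(dose >= rx * pct / 100.0) * 100.0

def d_percent(dose, pct=90.0):
    """Dose received by the hottest pct% of the structure (D_pct)."""
    return np.percentile(dose, 100.0 - pct)

rx = 145.0  # Gy; an illustrative prescription, not taken from the study
dose = np.array([130., 140., 150., 160., 170., 180., 190., 200., 210., 220.])
print(v_percent(dose, rx))       # fraction of voxels at or above 145 Gy
print(d_percent(dose, 90.0))     # minimum dose to the hottest 90% of voxels
```

In practice the dose samples would come from a dose grid restricted to the contoured structure, and D90 conventions differ slightly between planning systems (interpolation method, voxel weighting).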
Real-time path planning and autonomous control for helicopter autorotation
NASA Astrophysics Data System (ADS)
Yomchinda, Thanan
Autorotation is a descending maneuver that can be used to recover helicopters in the event of total loss of engine power; however, it is an extremely difficult and complex maneuver. The objective of this work is to develop a real-time system which provides full autonomous control for autorotation landing of helicopters. The work includes the development of an autorotation path planning method and integration of the path planner with a primary flight control system. The trajectory is divided into three parts: entry, descent, and flare. Three different optimization algorithms are used to generate trajectories for each of these segments. The primary flight control is designed using a linear dynamic inversion control scheme, and a path following control law is developed to track the autorotation trajectories. Details of the path planning algorithm, trajectory following control law, and autonomous autorotation system implementation are presented. The integrated system is demonstrated in real-time high fidelity simulations. Results indicate the capability of the algorithms to operate in real time and the ability of the integrated system to provide safe autorotation landings. Preliminary simulations of autonomous autorotation on a small UAV are presented, which will lead to a final hardware demonstration of the algorithms.
Nonexpansiveness of a linearized augmented Lagrangian operator for hierarchical convex optimization
NASA Astrophysics Data System (ADS)
Yamagishi, Masao; Yamada, Isao
2017-04-01
Hierarchical convex optimization concerns two-stage optimization problems: the first stage problem is a convex optimization; the second stage problem is the minimization of a convex function over the solution set of the first stage problem. For the hierarchical convex optimization, the hybrid steepest descent method (HSDM) can be applied, where the solution set of the first stage problem must be expressed as the fixed point set of a certain nonexpansive operator. In this paper, we propose a nonexpansive operator that yields a computationally efficient update when it is plugged into the HSDM. The proposed operator is inspired by the update of the linearized augmented Lagrangian method. It is applicable to characterize the solution set of recent sophisticated convex optimization problems found in the context of inverse problems, where the sum of multiple proximable convex functions involving linear operators must be minimized to incorporate preferable properties into the minimizers. For such a problem formulation, there has not yet been reported any nonexpansive operator that yields an update free from the inversions of linear operators in cases where it is utilized in the HSDM. Unlike previously known nonexpansive operators, the proposed operator yields an inversion-free update in such cases. As an application of the proposed operator plugged into the HSDM, we also present, in the context of the so-called superiorization, an algorithmic solution to a convex optimization problem over the generalized convex feasible set where the intersection of the hard constraints is not necessarily simple.
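The HSDM iteration referenced above alternates a nonexpansive operator with a diminishing gradient step on the second-stage objective, x_{k+1} = T(x_k) − λ_k ∇Θ(T(x_k)); a toy sketch where Fix(T) is a box (so the first-stage solution set is the box itself) and Θ is a squared distance (all values illustrative):

```python
import numpy as np

def T(x):
    """Projection onto the box [-1,1]^2: nonexpansive, with Fix(T) = the box."""
    return np.clip(x, -1.0, 1.0)

a = np.array([2.0, 0.5])            # second-stage target point (invented)
def grad_theta(x):                  # gradient of Θ(x) = ||x - a||^2 / 2
    return x - a

x = np.zeros(2)
for k in range(1, 5000):
    lam = 1.0 / k                   # diminishing, non-summable step sizes
    y = T(x)
    x = y - lam * grad_theta(y)     # hybrid steepest descent update
# x converges to the minimizer of Θ over Fix(T), here (1, 0.5)
```

The point of the abstract's contribution is choosing T so that this per-iteration update needs no inversion of the linear operators appearing in the first-stage problem; the projection used here is only the simplest stand-in for such an operator.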
NASA Astrophysics Data System (ADS)
Liu, Yi; Zhang, He; Liu, Siwei; Lin, Fuchang
2018-05-01
The J-A (Jiles-Atherton) model is widely used to describe the magnetization characteristics of magnetic cores in a low-frequency alternating field. However, this model is deficient in the quantitative analysis of the eddy current loss and residual loss in a high-frequency magnetic field. Based on the decomposition of magnetization intensity, an inverse J-A model is established which uses magnetic flux density B as an input variable. Static and dynamic core losses under high frequency excitation are separated based on the inverse J-A model. Optimized parameters of the inverse J-A model are obtained based on particle swarm optimization. The platform for the pulsed magnetization characteristic test is designed and constructed. The hysteresis curves of ferrite and Fe-based nanocrystalline cores at high magnetization rates are measured. The simulated and measured hysteresis curves are presented and compared. It is found that the inverse J-A model can be used to describe the magnetization characteristics at high magnetization rates and to separate the static loss and dynamic loss accurately.
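Particle swarm optimization as used for the parameter identification above can be sketched on a stand-in curve-fitting problem; the saturation curve a·tanh(b·h) below is an illustrative surrogate for a magnetization curve, not the actual inverse J-A equations:

```python
import numpy as np

rng = np.random.default_rng(1)
h = np.linspace(-3, 3, 50)
y_meas = 1.5 * np.tanh(0.8 * h)     # "measured" curve; true parameters (1.5, 0.8)

def loss(p):
    a, b = p
    return np.sum((a * np.tanh(b * h) - y_meas) ** 2)

# Minimal global-best particle swarm with inertia
n_p, iters = 30, 200
pos = rng.uniform(0.1, 3.0, (n_p, 2))
vel = np.zeros((n_p, 2))
pbest = pos.copy()
pbest_f = np.array([loss(p) for p in pos])
g = pbest[np.argmin(pbest_f)].copy()

for _ in range(iters):
    r1, r2 = rng.random((n_p, 2)), rng.random((n_p, 2))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
    pos = pos + vel
    f = np.array([loss(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    g = pbest[np.argmin(pbest_f)].copy()
```

For the real inverse J-A model the loss would compare simulated and measured hysteresis loops over the full five-parameter set, but the swarm mechanics are the same.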
Patient-specific dosimetric endpoints based treatment plan quality control in radiotherapy.
Song, Ting; Staub, David; Chen, Mingli; Lu, Weiguo; Tian, Zhen; Jia, Xun; Li, Yongbao; Zhou, Linghong; Jiang, Steve B; Gu, Xuejun
2015-11-07
In intensity modulated radiotherapy (IMRT), the optimal plan for each patient is specific due to unique patient anatomy. To achieve such a plan, patient-specific dosimetric goals reflecting each patient's unique anatomy should be defined and adopted in the treatment planning procedure for plan quality control. This study develops such a personalized treatment plan quality control tool by predicting patient-specific dosimetric endpoints (DEs). The incorporation of patient-specific DEs is realized by a multi-OAR geometry-dosimetry model, capable of predicting optimal DEs based on the individual patient's geometry. The overall quality of a treatment plan is then judged with a numerical treatment plan quality indicator and characterized as optimal or suboptimal. Taking advantage of clinically available prostate volumetric modulated arc therapy (VMAT) treatment plans, we built and evaluated our proposed plan quality control tool. Using the developed tool, six of twenty evaluated plans were identified as suboptimal. After re-optimization, these plans achieved better OAR dose sparing without sacrificing PTV coverage, and the dosimetric endpoints of the re-optimized plans agreed well with the model-predicted values, which validates the predictability of the proposed tool. In conclusion, the developed tool is able to accurately predict optimally achievable DEs of multiple OARs, identify suboptimal plans, and guide plan optimization. It is a useful tool for achieving patient-specific treatment plan quality control.
Application of a stochastic inverse to the geophysical inverse problem
NASA Technical Reports Server (NTRS)
Jordan, T. H.; Minster, J. B.
1972-01-01
The inverse problem for gross earth data can be reduced to an underdetermined linear system of integral equations of the first kind. A theory is discussed for computing particular solutions to this linear system based on the stochastic inverse theory presented by Franklin. The stochastic inverse is derived and related to the generalized inverse of Penrose and Moore. A Backus-Gilbert type tradeoff curve is constructed for the problem of estimating the solution to the linear system in the presence of noise. It is shown that the stochastic inverse represents an optimal point on this tradeoff curve. A useful form of the solution autocorrelation operator as a member of a one-parameter family of smoothing operators is derived.
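For an underdetermined linear system Gx = d, a damped stochastic-inverse-style estimate takes the form x = Gᵀ(GGᵀ + σ²I)⁻¹d, which recovers the Moore-Penrose minimum-norm solution as σ² → 0 and smooths the solution as σ² grows; a numerical sketch with an invented operator:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 4, 10                        # fewer data than unknowns: underdetermined
G = rng.normal(size=(m, n))
d = rng.normal(size=m)

def stochastic_inverse(G, d, sigma2):
    """Damped minimum-norm solution; sigma2 plays the role of a noise-to-signal ratio."""
    return G.T @ np.linalg.inv(G @ G.T + sigma2 * np.eye(G.shape[0])) @ d

x = stochastic_inverse(G, d, 1e-10)        # nearly undamped
x_pinv = np.linalg.pinv(G) @ d             # Moore-Penrose generalized inverse
```

Increasing σ² trades data fit for a smaller-norm (smoother) model, which is the tradeoff-curve behavior the abstract formalizes in the Backus-Gilbert sense.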
Taylor, Jeremy M G; Cheng, Wenting; Foster, Jared C
2015-03-01
A recent article (Zhang et al., 2012, Biometrics 68, 1010-1018) compares regression-based and inverse-probability-based methods of estimating an optimal treatment regime and shows, for a small number of covariates, that inverse probability weighted methods are more robust to model misspecification than regression methods. We demonstrate that using models that fit the data better reduces the concern about non-robustness for the regression methods. We extend the simulation study of Zhang et al. (2012, Biometrics 68, 1010-1018), also considering the situation of a larger number of covariates, and show that incorporating random forests into both regression-based and inverse probability weighted methods improves their properties. © 2014, The International Biometric Society.
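The inverse-probability-weighted value estimator at the heart of such comparisons weights each subject whose received treatment happens to agree with the candidate regime by the inverse probability of that treatment; a sketch on simulated randomized data (the outcome model and effect sizes are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20000
x = rng.normal(size=n)                     # single covariate
propensity = 0.5                           # randomized treatment assignment
a = rng.random(n) < propensity             # treatment received (boolean)
# outcome: treatment helps exactly when x > 0 (toy generative model)
y = np.where(a == (x > 0), 1.0, 0.0) + 0.1 * rng.normal(size=n)

def ipw_value(rule):
    """Inverse-probability-weighted estimate of E[Y] under regime `rule`."""
    follows = (a == rule(x))
    w = follows / np.where(a, propensity, 1 - propensity)
    return np.sum(w * y) / np.sum(w)

v_opt = ipw_value(lambda x: x > 0)         # the truly optimal regime
v_bad = ipw_value(lambda x: x <= 0)        # its opposite
```

The regression-based alternative would instead model E[Y | X, A] and average the modeled outcome under each regime; the article's point is that the two approaches fail differently when their respective models are misspecified.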
Exploring chemical reaction mechanisms through harmonic Fourier beads path optimization.
Khavrutskii, Ilja V; Smith, Jason B; Wallqvist, Anders
2013-10-28
Here, we apply the harmonic Fourier beads (HFB) path optimization method to study chemical reactions involving covalent bond breaking and forming on quantum mechanical (QM) and hybrid QM/molecular mechanical (QM/MM) potential energy surfaces. To improve efficiency of the path optimization on such computationally demanding potentials, we combined HFB with conjugate gradient (CG) optimization. The combined CG-HFB method was used to study two biologically relevant reactions, namely, L- to D-alanine amino acid inversion and alcohol acylation by amides. The optimized paths revealed several unexpected reaction steps in the gas phase. For example, on the B3LYP/6-31G(d,p) potential, we found that alanine inversion proceeded via previously unknown intermediates, 2-iminopropane-1,1-diol and 3-amino-3-methyloxiran-2-ol. The CG-HFB method accurately located transition states, aiding in the interpretation of complex reaction mechanisms. Thus, on the B3LYP/6-31G(d,p) potential, the gas phase activation barriers for the inversion and acylation reactions were 50.5 and 39.9 kcal/mol, respectively. These barriers determine the spontaneous loss of amino acid chirality and cleavage of peptide bonds in proteins. We conclude that the combined CG-HFB method further advances QM and QM/MM studies of reaction mechanisms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
Learning Objectives: Although brachytherapy is the oldest form of radiation therapy, the rapid advancement of the methods of dose calculation, treatment planning, and treatment delivery pushes us to keep updating our knowledge and experience with new procedures all the time. Our purpose is to present the newest applicators used in Accelerated Partial Breast Irradiation (APBI) and the techniques of using them for a maximally effective treatment. Our objective is to familiarize the user with the SAVI, Contura, and ML MammoSite applicators, from the detailed description and measurements, to cavity evaluation and choice of size, to acceptance tests and use of each. At the end of the session the attendants will be able to assist at the scanning of the patient for the first treatment, decide on the proper localization and immobilization devices, import the scans into the treatment planning system, perform the structure segmentation, reconstruct the catheters, and develop a treatment plan using inverse planning (IPSA) or volume optimization. The attendant should be able to evaluate the quality of a treatment plan according to the ABS protocols and B39 after this session. Our goal is for all the attendants to gain knowledge of all the quality assurance procedures required to be performed prior to a treatment, at the beginning of a treatment day, weekly, monthly, and annually on the remote afterloader, the treatment planning system, and the secondary check system. We will provide tips for a consistent treatment delivery of the 10 fractions in a BID (twice daily) regimen.
NASA Astrophysics Data System (ADS)
Sun, J.; Shen, Z.; Burgmann, R.; Liang, F.
2012-12-01
We develop a three-step Maximum-A-Posteriori probability (MAP) method for coseismic rupture inversion, which aims at maximizing the a posteriori probability density function (PDF) of elastic solutions of earthquake rupture. The method originates from the Fully Bayesian Inversion (FBI) and the mixed linear-nonlinear Bayesian inversion (MBI) methods, shares the same a posteriori PDF with them, and keeps most of their merits, while overcoming their convergence difficulty when large numbers of low-quality data are used and greatly improving the convergence rate using optimization procedures. A highly efficient global optimization algorithm, Adaptive Simulated Annealing (ASA), is used to search for the maximum posterior probability in the first step. The non-slip parameters are determined by the global optimization method, and the slip parameters are inverted for using the least squares method, initially without positivity constraint, and then damped to a physically reasonable range. This first-step MAP inversion brings the inversion close to the 'true' solution quickly and jumps over local maximum regions in high-dimensional parameter space. The second-step inversion approaches the 'true' solution further, with positivity constraints subsequently applied on slip parameters using the Monte Carlo Inversion (MCI) technique, with all parameters obtained from step one as the initial solution. Then the slip artifacts are eliminated from slip models in the third-step MAP inversion with fault geometry parameters fixed. We first used a designed model with a 45-degree dipping angle and oblique slip, and corresponding synthetic InSAR data sets, to validate the efficiency and accuracy of the method. We then applied the method to four recent large earthquakes in Asia, namely the 2010 Yushu, China earthquake, the 2011 Burma earthquake, the 2011 New Zealand earthquake, and the 2008 Qinghai, China earthquake, and compared our results with those from other groups.
Our results show the effectiveness of the method in earthquake studies and a number of advantages of it over other methods. The details will be reported on the meeting.
Lithological and Surface Geometry Joint Inversions Using Multi-Objective Global Optimization Methods
NASA Astrophysics Data System (ADS)
Lelièvre, Peter; Bijani, Rodrigo; Farquharson, Colin
2016-04-01
Geologists' interpretations about the Earth typically involve distinct rock units with contacts (interfaces) between them. In contrast, standard minimum-structure geophysical inversions are performed on meshes of space-filling cells (typically prisms or tetrahedra) and recover smoothly varying physical property distributions that are inconsistent with typical geological interpretations. There are several approaches through which mesh-based minimum-structure geophysical inversion can help recover models with some of the desired characteristics. However, a more effective strategy may be to consider two fundamentally different types of inversions: lithological and surface geometry inversions. A major advantage of these two inversion approaches is that joint inversion of multiple types of geophysical data is greatly simplified. In a lithological inversion, the subsurface is discretized into a mesh and each cell contains a particular rock type. A lithological model must be translated to a physical property model before geophysical data simulation. Each lithology may map to discrete property values or there may be some a priori probability density function associated with the mapping. Through this mapping, lithological inverse problems limit the parameter domain and consequently reduce the non-uniqueness from that presented by standard mesh-based inversions that allow physical property values on continuous ranges. Furthermore, joint inversion is greatly simplified because no additional mathematical coupling measure is required in the objective function to link multiple physical property models. In a surface geometry inversion, the model comprises wireframe surfaces representing contacts between rock units. This parameterization is then fully consistent with Earth models built by geologists, which in 3D typically comprise wireframe contact surfaces of tessellated triangles. 
As for the lithological case, the physical properties of the units lying between the contact surfaces are set to a priori values. The inversion is tasked with calculating the geometry of the contact surfaces instead of some piecewise distribution of properties in a mesh. Again, no coupling measure is required and joint inversion is simplified. Both of these inverse problems involve high nonlinearity and discontinuous or non-obtainable derivatives. They can also involve the existence of multiple minima. Hence, one cannot apply the standard descent-based local minimization methods used to solve typical minimum-structure inversions. Instead, we are applying Pareto multi-objective global optimization (PMOGO) methods, which generate a suite of solutions that minimize multiple objectives (e.g. data misfits and regularization terms) in a Pareto-optimal sense. Providing a suite of models, as opposed to a single model that minimizes a weighted sum of objectives, allows a more complete assessment of the possibilities and avoids the often difficult choice of how to weight each objective. While there are definite advantages to PMOGO joint inversion approaches, the methods come with significantly increased computational requirements. We are researching various strategies to ameliorate these computational issues including parallelization and problem dimension reduction.
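The Pareto-optimal selection at the heart of PMOGO can be sketched with a minimal non-dominated filter over candidate models; the two objective values below are invented stand-ins for the gravity and magnetics misfits, not the authors' data.

```python
import numpy as np

def pareto_front(objectives):
    """Indices of non-dominated rows of an (n_models, n_objectives) array."""
    keep = []
    for i in range(objectives.shape[0]):
        others = np.delete(objectives, i, axis=0)
        # model i is dominated if some other model is no worse in every
        # objective and strictly better in at least one
        dominated = np.any(np.all(others <= objectives[i], axis=1) &
                           np.any(others < objectives[i], axis=1))
        if not dominated:
            keep.append(i)
    return keep

# two data misfits (say, gravity and magnetics) for five candidate models
f = np.array([[1.0, 5.0],    # Pareto-optimal
              [2.0, 2.0],    # Pareto-optimal
              [5.0, 1.0],    # Pareto-optimal
              [3.0, 3.0],    # dominated by [2.0, 2.0]
              [6.0, 6.0]])   # dominated
front = pareto_front(f)
```

The returned suite (here models 0, 1, and 2) is the trade-off curve a practitioner would assess, instead of a single weighted-sum solution.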
Leaf position optimization for step-and-shoot IMRT.
De Gersem, W; Claus, F; De Wagter, C; Van Duyse, B; De Neve, W
2001-12-01
To describe the theoretical basis, the algorithm, and implementation of a tool that optimizes segment shapes and weights for step-and-shoot intensity-modulated radiation therapy delivered by multileaf collimators. The tool, called SOWAT (Segment Outline and Weight Adapting Tool) is applied to a set of segments, segment weights, and corresponding dose distribution, computed by an external dose computation engine. SOWAT evaluates the effects of changing the position of each collimating leaf of each segment on an objective function, as follows. Changing a leaf position causes a change in the segment-specific dose matrix, which is calculated by a fast dose computation algorithm. A weighted sum of all segment-specific dose matrices provides the dose distribution and allows computation of the value of the objective function. Only leaf position changes that comply with the multileaf collimator constraints are evaluated. Leaf position changes that tend to decrease the value of the objective function are retained. After several possible positions have been evaluated for all collimating leaves of all segments, an external dose engine recomputes the dose distribution, based on the adapted leaf positions and weights. The plan is evaluated. If the plan is accepted, a segment sequencer is used to make the prescription files for the treatment machine. Otherwise, the user can restart SOWAT using the new set of segments, segment weights, and corresponding dose distribution. The implementation was illustrated using two example cases. The first example is a T1N0M0 supraglottic cancer case that was distributed as a multicenter planning exercise by investigators from Rotterdam, The Netherlands. The exercise involved a two-phase plan. Phase 1 involved the delivery of 46 Gy to a concave-shaped planning target volume (PTV) consisting of the primary tumor volume and the elective lymph nodal regions II-IV on both sides of the neck. 
Phase 2 involved a boost of 24 Gy to the primary tumor region only. SOWAT was applied to the Phase 1 plan. Parotid sparing was a planning goal. The second implementation example is an ethmoid sinus cancer case, planned with the intent of bilateral visus sparing. The median PTV prescription dose was 70 Gy with a maximum dose constraint to the optic pathway structures of 60 Gy. The initial set of segments, segment weights, and corresponding dose distribution were obtained, respectively, by an anatomy-based segmentation tool, a segment weight optimization tool, and a differential scatter-air ratio dose computation algorithm as external dose engine. For the supraglottic case, this resulted in a plan that proved to be comparable to the plans obtained at the other institutes by forward or inverse planning techniques. After using SOWAT, the minimum PTV dose and PTV dose homogeneity increased; the maximum dose to the spinal cord decreased from 38 Gy to 32 Gy. The left parotid mean dose decreased from 22 Gy to 19 Gy and the right parotid mean dose from 20 to 18 Gy. For the ethmoid sinus case, the target homogeneity increased by leaf position optimization, together with a better sparing of the optical tracts. By using SOWAT, the plans improved with respect to all plan evaluation end points. Compliance with the multileaf collimator constraints is guaranteed. The treatment delivery time remains almost unchanged, because no additional segments are created.
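The SOWAT loop of trial leaf moves retained only when they lower the objective can be sketched on a 1D stand-in: segments are leaf-pair intervals on a dose grid, and the target profile and weights below are illustrative, not the paper's dose matrices.

```python
import numpy as np

ngrid = 20
target = np.zeros(ngrid)
target[5:15] = 1.0                                 # desired 1D dose profile

def dose(segments, weights):
    d = np.zeros(ngrid)
    for (a, b), w in zip(segments, weights):
        d[a:b] += w                                # field open between leaves a..b
    return d

def objective(segments, weights):
    return np.sum((dose(segments, weights) - target) ** 2)

segments = [[3, 12], [8, 18]]                      # [left leaf, right leaf] pairs
weights = [0.5, 0.5]

for _ in range(50):                                # sweeps of trial leaf moves
    improved = False
    for s in segments:
        for i in (0, 1):
            for step in (-1, 1):
                before = objective(segments, weights)
                s[i] += step
                if 0 <= s[0] < s[1] <= ngrid and objective(segments, weights) < before:
                    improved = True                # keep the improving move
                else:
                    s[i] -= step                   # undo
    if not improved:
        break                                      # no leaf move helps anymore

final = objective(segments, weights)
```

On this toy case both segments converge to the target interval [5, 15] and the objective reaches zero; in SOWAT itself the inner evaluation uses segment-specific dose matrices and an external dose engine recomputes the full distribution afterwards.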
Inverse Statistics and Asset Allocation Efficiency
NASA Astrophysics Data System (ADS)
Bolgorian, Meysam
In this paper, the effect of investment horizon on the efficiency of portfolio selection is examined using inverse statistics analysis. Inverse statistics analysis is a general tool, also known as the probability distribution of exit times, used to detect the distribution of the time at which a stochastic process exits a zone. This analysis was used in Refs. 1 and 2 for studying financial returns time series. The distribution provides an optimal investment horizon, which determines the most likely horizon for gaining a specific return. Using samples of stocks from the Tehran Stock Exchange (TSE) as an emerging market and the S&P 500 as a developed market, the effect of the optimal investment horizon on asset allocation is assessed. It is found that taking the optimal investment horizon into account in the TSE leads to more efficiency for large portfolios, while for stocks selected from the S&P 500, regardless of portfolio size, this strategy does not produce more efficient portfolios; instead, longer investment horizons provide more efficiency.
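The exit-time distribution underlying inverse statistics can be sketched as follows; the geometric-random-walk log-price path and return level are synthetic stand-ins for market data.

```python
import numpy as np

rng = np.random.default_rng(0)
logp = np.cumsum(rng.normal(0.0005, 0.01, 5000))   # synthetic log-price path

def exit_times(logp, rho):
    """First waiting time, from each start day, at which the cumulative
    log-return first reaches the level rho."""
    times = []
    for t in range(len(logp) - 1):
        hits = np.nonzero(logp[t + 1:] - logp[t] >= rho)[0]
        if hits.size:
            times.append(hits[0] + 1)              # waiting time in days
    return np.array(times)

tau = exit_times(logp, rho=0.05)
# The mode of the exit-time histogram is the "optimal investment horizon".
optimal_horizon = int(np.argmax(np.bincount(tau)))
```

In the paper this horizon, estimated per stock, feeds back into the portfolio-selection step; here it is simply the most likely waiting time for a 5% gain.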
On the calibration process of film dosimetry: OLS inverse regression versus WLS inverse prediction.
Crop, F; Van Rompaye, B; Paelinck, L; Vakaet, L; Thierens, H; De Wagter, C
2008-07-21
The purpose of this study was both putting forward a statistically correct model for film calibration and the optimization of this process. A reliable calibration is needed in order to perform accurate reference dosimetry with radiographic (Gafchromic) film. Sometimes, an ordinary least squares simple linear (in the parameters) regression is applied to the dose-optical-density (OD) curve with the dose as a function of OD (inverse regression) or sometimes OD as a function of dose (inverse prediction). The application of a simple linear regression fit is an invalid method because heteroscedasticity of the data is not taken into account. This could lead to erroneous results originating from the calibration process itself and thus to a lower accuracy. In this work, we compare the ordinary least squares (OLS) inverse regression method with the correct weighted least squares (WLS) inverse prediction method to create calibration curves. We found that the OLS inverse regression method could lead to a prediction bias of up to 7.3 cGy at 300 cGy and total prediction errors of 3% or more for Gafchromic EBT film. Application of the WLS inverse prediction method resulted in a maximum prediction bias of 1.4 cGy and total prediction errors below 2% in a 0-400 cGy range. We developed a Monte-Carlo-based process to optimize calibrations, depending on the needs of the experiment. This type of thorough analysis can lead to a higher accuracy for film dosimetry.
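The contrast between the two calibration routes can be sketched on synthetic heteroscedastic film data (linear response and noise model assumed for illustration; real EBT response curves are nonlinear).

```python
import numpy as np

rng = np.random.default_rng(1)
dose = np.repeat(np.linspace(0, 400, 9), 5)          # cGy, replicate films
od_true = 0.002 * dose                               # assumed linear response
od = od_true + rng.normal(0, 0.01 + 0.0001 * dose)   # noise grows with dose

# OLS inverse regression: regress dose directly on OD, ignoring
# heteroscedasticity (the statistically invalid shortcut).
b_ols = np.polyfit(od, dose, 1)

# WLS inverse prediction: fit OD as a function of dose with weights 1/var,
# then invert the fitted line to predict dose from a measured OD.
w = 1.0 / (0.01 + 0.0001 * dose) ** 2
X = np.column_stack([np.ones_like(dose), dose])
beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * od))

def predict_dose_wls(od_measured):
    return (od_measured - beta[0]) / beta[1]

d_hat = predict_dose_wls(0.002 * 300)                # reading of a 300 cGy film
```

Because the weights down-weight the noisy high-dose films, the fitted response and the inverted dose predictions stay close to the truth even though the error variance changes across the dose range.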
Optimizer convergence and local minima errors and their clinical importance
NASA Astrophysics Data System (ADS)
Jeraj, Robert; Wu, Chuan; Mackie, Thomas R.
2003-09-01
Two of the errors common in the inverse treatment planning optimization have been investigated. The first error is the optimizer convergence error, which appears because of non-perfect convergence to the global or local solution, usually caused by a non-zero stopping criterion. The second error is the local minima error, which occurs when the objective function is not convex and/or the feasible solution space is not convex. The magnitude of the errors, their relative importance in comparison to other errors as well as their clinical significance in terms of tumour control probability (TCP) and normal tissue complication probability (NTCP) were investigated. Two inherently different optimizers, a stochastic simulated annealing and deterministic gradient method were compared on a clinical example. It was found that for typical optimization the optimizer convergence errors are rather small, especially compared to other convergence errors, e.g., convergence errors due to inaccuracy of the current dose calculation algorithms. This indicates that stopping criteria could often be relaxed leading into optimization speed-ups. The local minima errors were also found to be relatively small and typically in the range of the dose calculation convergence errors. Even for the cases where significantly higher objective function scores were obtained the local minima errors were not significantly higher. Clinical evaluation of the optimizer convergence error showed good correlation between the convergence of the clinical TCP or NTCP measures and convergence of the physical dose distribution. On the other hand, the local minima errors resulted in significantly different TCP or NTCP values (up to a factor of 2) indicating clinical importance of the local minima produced by physical optimization.
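The optimizer convergence error induced by a non-zero stopping criterion can be illustrated on a toy convex quadratic (a hypothetical objective, not the clinical cost function): the residual objective error shrinks rapidly as the tolerance tightens, which is why relaxing the criterion is often affordable.

```python
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])       # SPD Hessian of the quadratic
b = np.array([1.0, 2.0])
x_star = np.linalg.solve(A, b)               # exact minimizer

def f(x):
    return 0.5 * x @ A @ x - b @ x

def solve(tol):
    """Gradient descent stopped at a non-zero gradient-norm tolerance."""
    x = np.zeros(2)
    for _ in range(10000):
        g = A @ x - b
        if np.linalg.norm(g) < tol:          # the stopping criterion
            break
        x = x - 0.2 * g
    return x

err_loose = f(solve(1e-2)) - f(x_star)       # convergence error, loose tolerance
err_tight = f(solve(1e-6)) - f(x_star)       # convergence error, tight tolerance
```

For a quadratic the objective error is bounded by the squared gradient norm over twice the smallest Hessian eigenvalue, so a tolerance of 1e-2 already leaves only a tiny residual error here.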
Cao, Wenhua; Lim, Gino; Li, Xiaoqiang; Li, Yupeng; Zhu, X. Ronald; Zhang, Xiaodong
2014-01-01
The purpose of this study is to investigate the feasibility and impact of incorporating deliverable monitor unit (MU) constraints into spot intensity optimization in intensity modulated proton therapy (IMPT) treatment planning. The current treatment planning system (TPS) for IMPT disregards deliverable MU constraints in the spot intensity optimization (SIO) routine. It performs a post-processing procedure on an optimized plan to enforce deliverable MU values that are required by the spot scanning proton delivery system. This procedure can create a significant dose distribution deviation between the optimized and post-processed deliverable plans, especially when small spot spacings are used. In this study, we introduce a two-stage linear programming (LP) approach to optimize spot intensities and constrain deliverable MU values simultaneously, i.e., a deliverable spot intensity optimization (DSIO) model. Thus, the post-processing procedure is eliminated and the associated optimized plan deterioration can be avoided. Four prostate cancer cases at our institution were selected for study and two parallel opposed beam angles were planned for all cases. A quadratic programming (QP) based model without MU constraints, i.e., a conventional spot intensity optimization (CSIO) model, was also implemented to emulate the commercial TPS. Plans optimized by both the DSIO and CSIO models were evaluated for five different settings of spot spacing from 3 mm to 7 mm. For all spot spacings, the DSIO-optimized plans yielded better uniformity for the target dose coverage and critical structure sparing than did the CSIO-optimized plans. With reduced spot spacings, more significant improvements in target dose uniformity and critical structure sparing were observed in the DSIO- than in the CSIO-optimized plans. Additionally, better sparing of the rectum and bladder was achieved when reduced spacings were used for the DSIO-optimized plans. 
The proposed DSIO approach ensures the deliverability of optimized IMPT plans that take into account MU constraints. This eliminates the post-processing procedure required by the TPS as well as the resultant deteriorating effect on ultimate dose distributions. This approach therefore allows IMPT plans to adopt all possible spot spacings optimally. Moreover, dosimetric benefits can be achieved using smaller spot spacings. PMID:23835656
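The gap between post-processed and constrained spot optimization can be sketched on a toy problem; the 8x6 influence matrix, prescription, and minimum-MU value are invented, and the paper's two-stage LP is replaced here by projected gradient descent for brevity.

```python
import numpy as np

rng = np.random.default_rng(2)
D = rng.random((8, 6))                        # dose-influence: voxels x spots
x_true = np.array([1.2, 0.3, 0.9, 0.1, 1.0, 0.5])
d = D @ x_true                                # prescribed voxel doses
mu_min = 0.8                                  # deliverable minimum MU per spot

def solve(project, iters=5000, step=0.05):
    """Projected gradient descent on ||D x - d||^2."""
    x = np.full(6, 0.5)
    for _ in range(iters):
        x = project(x - step * D.T @ (D @ x - d))
    return x

def deliverable(x):
    """Project onto the deliverable set: each spot is 0 or at least mu_min."""
    return np.where(x < mu_min / 2, 0.0, np.maximum(x, mu_min))

# (a) optimize without MU constraints, then post-process (mimics current TPS)
x_free = solve(lambda x: np.maximum(x, 0.0))
x_post = deliverable(x_free)

# (b) keep the iterates deliverable throughout (DSIO-like, simplified)
x_con = solve(deliverable)

err_free = np.linalg.norm(D @ x_free - d)     # plan quality before rounding
err_post = np.linalg.norm(D @ x_post - d)     # deviation added by post-processing
err_con = np.linalg.norm(D @ x_con - d)
```

The post-processing step forces spots below the MU threshold to zero or up to the minimum, which is exactly the dose deviation the DSIO formulation avoids by building the constraint into the optimization.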
A dosimetric comparison of ¹⁶⁹Yb versus ¹⁹²Ir for HDR prostate brachytherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lymperopoulou, G.; Papagiannis, P.; Sakelliou, L.
2005-12-15
For the purpose of evaluating the use of ¹⁶⁹Yb for prostate High Dose Rate (HDR) brachytherapy, a hypothetical ¹⁶⁹Yb source is assumed with the exact same design as the new microSelectron source, replacing the ¹⁹²Ir active core by pure ¹⁶⁹Yb metal. Monte Carlo simulation is employed for the full dosimetric characterization of both sources, and results are compared following the AAPM TG-43 dosimetric formalism. Monte Carlo calculated dosimetry results are incorporated in a commercially available treatment planning system (SWIFT™), which features an inverse treatment planning option based on a multiobjective dose optimization engine. The quality of prostate HDR brachytherapy using the real ¹⁹²Ir and hypothetical ¹⁶⁹Yb source is compared in a comprehensive analysis of different prostate implants in terms of the multiobjective dose optimization solutions as well as treatment quality indices such as Dose Volume Histograms (DVH) and the Conformal Index (COIN). Given that scattering overcompensates for absorption at intermediate photon energies and distances in the range of interest to prostate HDR brachytherapy, ¹⁶⁹Yb proves at least equivalent to ¹⁹²Ir irrespective of prostate volume. This has to be evaluated in view of the shielding requirements for the ¹⁶⁹Yb energies, which are minimal relative to those for ¹⁹²Ir.
Nasrallah, Fatima A; Lee, Eugene L Q; Chuang, Kai-Hsiang
2012-11-01
Arterial spin labeling (ASL) MRI provides a noninvasive method to image perfusion, and has been applied to map neural activation in the brain. Although pulsed labeling methods have been widely used in humans, continuous ASL with a dedicated neck labeling coil is still the preferred method in rodent brain functional MRI (fMRI) to maximize the sensitivity and allow multislice acquisition. However, the additional hardware is not readily available and hence its application is limited. In this study, flow-sensitive alternating inversion recovery (FAIR) pulsed ASL was optimized for fMRI of rat brain. A practical challenge of FAIR is the suboptimal global inversion by the transmit coil of limited dimensions, which results in low effective labeling. By using a large volume transmit coil and proper positioning to optimize the body coverage, the perfusion signal was increased by 38.3% compared with positioning the brain at the isocenter. An additional 53.3% gain in signal was achieved using optimized repetition and inversion times compared with a long TR. Under electrical stimulation to the forepaws, a perfusion activation signal change of 63.7 ± 6.3% can be reliably detected in the primary somatosensory cortices using single slice or multislice echo planar imaging at 9.4 T. This demonstrates the potential of using pulsed ASL for multislice perfusion fMRI in functional and pharmacological applications in rat brain. Copyright © 2012 John Wiley & Sons, Ltd.
Novel Application of Helical Tomotherapy in Whole Skull Palliative Radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodrigues, George; Yartsev, Slav; Coad, Terry
2008-01-01
Helical tomotherapy (HT) is a radiation planning/delivery platform that combines inversely planned IMRT with on-board megavoltage imaging. A unique HT whole skull brain-sparing radiotherapy technique is described in a patient with metastatic prostate cancer. An inverse HT plan and an accompanying back-up conventional lateral 6-MV parallel opposed pair (POP) plan, with corresponding isodose distributions and dose-volume histograms (DVH), were created and assessed prior to initiation of therapy. Plans conforming to the planning treatment volume (PTV) with significant sparing of brain, optic nerve, and eye were created. Dose heterogeneity to the PTV target was slightly higher in the HT plan compared to the back-up POP plan. Conformal sparing of brain, optic nerve, and eye was achieved by the HT plan. Similar lens and brain stem/spinal cord doses were seen with both plans. Prospective clinical evaluation with relevant end points (quality of life, symptom relief) is required to confirm the potential benefits of highly conformal therapies applied to palliative situations such as this case.
Inverse lithography using sparse mask representations
NASA Astrophysics Data System (ADS)
Ionescu, Radu C.; Hurley, Paul; Apostol, Stefan
2015-03-01
We present a novel optimization algorithm for inverse lithography based on optimization of the mask derivative, a domain that is inherently sparse and, for rectilinear polygons, invertible. The method is first developed assuming a point light source and then extended to general incoherent sources. What results is a fast algorithm that produces manufacturable masks (the search space is constrained to rectilinear polygons) and is flexible (specific constraints such as minimal line widths can be imposed). One inherent trick is to treat polygons as continuous entities, making aerial image calculation extremely fast and accurate. Requirements for mask manufacturability can be integrated into the optimization without much added complexity. We also explain how to extend the scheme to phase-changing mask optimization.
An optimal resolved rate law for kinematically redundant manipulators
NASA Technical Reports Server (NTRS)
Bourgeois, B. J.
1987-01-01
The resolved rate law for a manipulator provides the instantaneous joint rates required to satisfy a given instantaneous hand motion. When the joint space has more degrees of freedom than the task space, the manipulator is kinematically redundant and the kinematic rate equations are underdetermined. These equations can be locally optimized, but the resulting pseudo-inverse solution was found to cause large joint rates in some cases. A weighting matrix in the locally optimized (pseudo-inverse) solution is dynamically adjusted to control the joint motion as desired. Joint reach limit avoidance is demonstrated in a kinematically redundant planar arm model. The treatment is applicable to redundant manipulators with any number of revolute joints and to nonplanar manipulators.
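The weighted pseudo-inverse resolved rate law can be sketched for a planar 3R arm; the joint configuration, hand velocity, and weighting values below are illustrative, not the paper's reach-limit weighting schedule.

```python
import numpy as np

l = np.array([1.0, 1.0, 1.0])                   # link lengths of a planar 3R arm

def jacobian(q):
    """2x3 end-effector position Jacobian of the planar arm."""
    c = np.cumsum(q)                            # cumulative joint angles
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = -np.sum(l[i:] * np.sin(c[i:]))
        J[1, i] = np.sum(l[i:] * np.cos(c[i:]))
    return J

def weighted_rates(q, xdot, w):
    """Minimize qdot^T W qdot subject to J qdot = xdot:
    qdot = W^-1 J^T (J W^-1 J^T)^-1 xdot."""
    Winv = np.diag(1.0 / w)
    J = jacobian(q)
    return Winv @ J.T @ np.linalg.solve(J @ Winv @ J.T, xdot)

q = np.array([0.3, 0.4, 0.5])                   # current joint angles
xdot = np.array([0.1, -0.2])                    # commanded hand velocity
qdot_eq = weighted_rates(q, xdot, w=np.ones(3))               # plain pseudo-inverse
qdot = weighted_rates(q, xdot, w=np.array([1.0, 1.0, 10.0]))  # joint 3 near a limit
```

Raising the weight on a joint approaching its reach limit shrinks that joint's commanded rate while the hand velocity constraint remains satisfied exactly, which is the mechanism behind the reach-limit avoidance described above.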
Joint Geophysical Inversion With Multi-Objective Global Optimization Methods
NASA Astrophysics Data System (ADS)
Lelievre, P. G.; Bijani, R.; Farquharson, C. G.
2015-12-01
Pareto multi-objective global optimization (PMOGO) methods generate a suite of solutions that minimize multiple objectives (e.g. data misfits and regularization terms) in a Pareto-optimal sense. Providing a suite of models, as opposed to a single model that minimizes a weighted sum of objectives, allows a more complete assessment of the possibilities and avoids the often difficult choice of how to weight each objective. We are applying PMOGO methods to three classes of inverse problems. The first class are standard mesh-based problems where the physical property values in each cell are treated as continuous variables. The second class of problems are also mesh-based but cells can only take discrete physical property values corresponding to known or assumed rock units. In the third class we consider a fundamentally different type of inversion in which a model comprises wireframe surfaces representing contacts between rock units; the physical properties of each rock unit remain fixed while the inversion controls the position of the contact surfaces via control nodes. This third class of problem is essentially a geometry inversion, which can be used to recover the unknown geometry of a target body or to investigate the viability of a proposed Earth model. Joint inversion is greatly simplified for the latter two problem classes because no additional mathematical coupling measure is required in the objective function. PMOGO methods can solve numerically complicated problems that could not be solved with standard descent-based local minimization methods. This includes the latter two classes of problems mentioned above. There are significant increases in the computational requirements when PMOGO methods are used but these can be ameliorated using parallelization and problem dimension reduction strategies.
Model-based assist feature insertion for sub-40nm memory device
NASA Astrophysics Data System (ADS)
Suh, Sungsoo; Lee, Suk-joo; Choi, Seong-woon; Lee, Sung-Woo; Park, Chan-hoon
2009-04-01
Many issues need to be resolved to create a production-worthy model-based assist feature insertion flow for single and double exposure patterning processes that extends low-k1 processing in 193 nm immersion technology. Model-based assist feature insertion is not trivial to implement for either single or double exposure patterning compared to rule-based methods. As shown in Fig. 1, pixel-based mask inversion technology in itself has difficulties in mask writing and inspection, although it is presented as one of the key technologies for extending single exposure for the contact layer. Thus far, inversion technology has been tried as a co-optimization of the target mask to simultaneously generate optimized main and sub-resolution assist features for a desired process window. Alternatively, the technology can also be used to optimize for a target feature after assist feature types are inserted, in order to simplify the mask complexity. Simplification of the inversion mask is one of the major issues in applying inversion technology to device development, even if a smaller mask feature can be fabricated, since mask writing time is also a major factor. As shown in Figure 2, mask writing time may be a limiting factor in determining whether or not an inversion solution is viable. It can be reasoned that an increased shot count relates to an increased margin for the inversion methodology. On the other hand, there is a limit on how complex a mask can be in order to be production worthy. There is also source and mask co-optimization, which influences the final mask patterns and assist feature sizes and positions for a given target. In this study, we discuss assist feature insertion methods for sub-40-nm technology.
NASA Astrophysics Data System (ADS)
Son, J.; Medina-Cetina, Z.
2017-12-01
We discuss the comparison between deterministic and stochastic optimization approaches to the nonlinear geophysical full-waveform inverse problem, based on seismic survey data from Mississippi Canyon in the Northern Gulf of Mexico. Since subsea engineering and offshore construction projects require reliable ground models from various site investigations, the primary goal of this study is to reconstruct accurate subsurface information on the soil and rock material profiles under the seafloor. The shallow sediment layers are naturally formed heterogeneous formations, which may cause unwanted marine landslides or foundation failures of underwater infrastructure. We chose quasi-Newton and simulated annealing as the deterministic and stochastic optimization algorithms, respectively. Seismic forward modeling, based on the finite difference method with absorbing boundary conditions, provides the iterative simulations for the inverse modeling. We briefly report on numerical experiments using synthetic data as an offshore ground model containing shallow artificial target profiles of geomaterials under the seafloor. We apply seismic migration processing and generate a Voronoi tessellation on the two-dimensional space domain to improve the computational efficiency of the stratigraphic velocity model reconstruction. We then report on the details of a field data implementation, which shows the complex geologic structures in the Northern Gulf of Mexico. Lastly, we compare the new inverted image of subsurface site profiles in the space domain with the previously processed seismic image in the time domain at the same location. Overall, stochastic optimization for seismic inversion with migration and Voronoi tessellation shows significant promise for improving the subsurface imaging of ground models and the computational efficiency required for full waveform inversion.
We anticipate that improving the inversion of shallow layers from geophysical data will better support offshore site investigation.
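The stochastic side of the comparison can be sketched with a simulated-annealing loop; the 1D non-convex misfit below is a synthetic stand-in for the waveform misfit, chosen so its global minimum is known exactly.

```python
import numpy as np

def misfit(v):
    # Synthetic non-convex objective: global minimum exactly at v = 2.0,
    # surrounded by local minima created by the oscillatory term.
    return (v - 2.0) ** 2 + 1.5 * np.sin(4.0 * (v - 2.0)) ** 2

rng = np.random.default_rng(3)
v = 5.0                                      # start far from the answer
fv = misfit(v)
best_v, best_f = v, fv
T = 2.0                                      # initial temperature
for _ in range(4000):
    cand = v + rng.normal(0.0, 0.3)          # random trial model
    fc = misfit(cand)
    # Metropolis rule: always accept downhill, sometimes accept uphill.
    if fc < fv or rng.random() < np.exp(-(fc - fv) / T):
        v, fv = cand, fc
        if fv < best_f:
            best_v, best_f = v, fv
    T *= 0.999                               # geometric cooling schedule
```

The occasional uphill acceptances early on let the walker escape the local basins that would trap a quasi-Newton descent started at the same point.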
Design optimization of axial flow hydraulic turbine runner: Part I - an improved Q3D inverse method
NASA Astrophysics Data System (ADS)
Peng, Guoyi; Cao, Shuliang; Ishizuka, Masaru; Hayama, Shinji
2002-06-01
With the aim of constructing a comprehensive design optimization procedure for axial flow hydraulic turbines, an improved quasi-three-dimensional inverse method has been proposed from a systems viewpoint, and a set of rotational flow governing equations as well as a blade geometry design equation has been derived. In the inverse method, the computation domain is taken from the inlet of the guide vane to the far outlet of the runner blade, and flows in different regions are solved simultaneously. Thus the influence of wicket gate parameters on the runner blade design can be considered, and the difficulty of defining the flow condition at the runner blade inlet is surmounted. As a pre-computation of the initial blade design on the S2m surface is newly adopted, the iteration of the S1 and S2m surfaces has been greatly reduced and the convergence of the inverse computation has been improved. The present model has been applied to the inverse computation of a Kaplan turbine runner. Experimental results and direct flow analysis have validated the inverse computation. Numerical investigations show that a proper enlargement of the guide vane distribution diameter is advantageous for improving the performance of an axial hydraulic turbine runner.
Solving geosteering inverse problems by stochastic Hybrid Monte Carlo method
Shen, Qiuyang; Wu, Xuqing; Chen, Jiefu; ...
2017-11-20
The inverse problems arise in almost all fields of science where real-world parameters are extracted from a set of measured data. The geosteering inversion plays an essential role in accurately predicting oncoming strata and in reliably guiding adjustment of the borehole position on the fly to reach one or more geological targets. This problem is not easy to solve, as it requires finding an optimum solution in a large solution space, especially when the problem is non-linear and non-convex. Nowadays, a new generation of logging-while-drilling (LWD) tools has emerged on the market. The so-called azimuthal resistivity LWD tools have azimuthal sensitivity and a large depth of investigation. Hence, the associated inverse problems become much more difficult, since the earth model to be inverted will have more detailed structures. Conventional deterministic methods are incapable of solving such a complicated inverse problem, as they suffer from the local minimum trap. Alternatively, stochastic optimizations are in general better at finding global optimal solutions and handling uncertainty quantification. In this article, we investigate the Hybrid Monte Carlo (HMC) based statistical inversion approach and suggest that HMC based inference is more efficient in dealing with the increased complexity and uncertainty faced by geosteering problems.
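The Hybrid (Hamiltonian) Monte Carlo machinery can be sketched on a toy 2D Gaussian posterior; the target density, step size, and trajectory length are illustrative, not a geosteering model.

```python
import numpy as np

Sinv = np.array([[2.0, 0.6], [0.6, 1.0]])         # inverse posterior covariance

def U(x):                                          # negative log posterior density
    return 0.5 * x @ Sinv @ x

def grad_U(x):
    return Sinv @ x

rng = np.random.default_rng(4)
eps, L = 0.15, 10                                  # leapfrog step size and length
x = np.zeros(2)
samples = []
for _ in range(3000):
    p = rng.normal(size=2)                         # fresh Gaussian momentum
    x_new, p_new = x.copy(), p.copy()
    for _ in range(L):                             # leapfrog integration
        p_new -= 0.5 * eps * grad_U(x_new)
        x_new += eps * p_new
        p_new -= 0.5 * eps * grad_U(x_new)
    # Metropolis correction on the total energy H = U + |p|^2 / 2
    dH = (U(x_new) + 0.5 * p_new @ p_new) - (U(x) + 0.5 * p @ p)
    if rng.random() < np.exp(-dH):
        x = x_new
    samples.append(x.copy())
cov_est = np.cov(np.array(samples[500:]).T)        # discard burn-in
```

Because each proposal follows a gradient-guided trajectory rather than a blind random step, the chain traverses the posterior efficiently, which is the efficiency argument made above for geosteering-scale models.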
A Gauss-Newton full-waveform inversion in PML-truncated domains using scalar probing waves
NASA Astrophysics Data System (ADS)
Pakravan, Alireza; Kang, Jun Won; Newtson, Craig M.
2017-12-01
This study considers the characterization of subsurface shear wave velocity profiles in semi-infinite media using scalar waves. Using surficial responses caused by probing waves, a reconstruction of the material profile is sought using a Gauss-Newton full-waveform inversion method in a two-dimensional domain truncated by perfectly matched layer (PML) wave-absorbing boundaries. The PML is introduced to limit the semi-infinite extent of the half-space and to prevent reflections from the truncated boundaries. A hybrid unsplit-field PML is formulated in the inversion framework to enable more efficient wave simulations than with a fully mixed PML. The full-waveform inversion method is based on a constrained optimization framework that is implemented using Karush-Kuhn-Tucker (KKT) optimality conditions to minimize the objective functional augmented by PML-endowed wave equations via Lagrange multipliers. The KKT conditions consist of state, adjoint, and control problems, and are solved iteratively to update the shear wave velocity profile of the PML-truncated domain. Numerical examples show that the developed Gauss-Newton inversion method is accurate and more efficient than an alternative inversion method. The algorithm's performance is demonstrated by numerical examples including the case of noisy measurement responses and the case of a reduced number of sources and receivers.
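The Gauss-Newton update at the core of the inversion can be sketched on a small generic nonlinear least-squares problem (fitting an exponential), since assembling PML-truncated wave physics is beyond a snippet; the forward model and data below are invented.

```python
import numpy as np

t = np.linspace(0.0, 2.0, 30)
m_true = np.array([2.0, 1.3])
d_obs = m_true[0] * np.exp(-m_true[1] * t)        # noise-free observed data

def forward(m):
    return m[0] * np.exp(-m[1] * t)

def jac(m):
    """Jacobian of the forward map with respect to the model parameters."""
    e = np.exp(-m[1] * t)
    return np.column_stack([e, -m[0] * t * e])

m = np.array([1.0, 0.5])                          # initial model guess
for _ in range(20):
    r = forward(m) - d_obs                        # data residual
    J = jac(m)
    # Gauss-Newton update: solve (J^T J) dm = -J^T r, then update the model
    m = m + np.linalg.solve(J.T @ J, -J.T @ r)
```

In the full-waveform setting, evaluating `forward` and the Jacobian action requires PML-endowed wave solves via the state and adjoint problems, but the model-update algebra has this same shape.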
2D Inviscid and Viscous Inverse Design Using Continuous Adjoint and Lax-Wendroff Formulation
NASA Astrophysics Data System (ADS)
Proctor, Camron Lisle
The continuous adjoint (CA) technique for optimization and/or inverse design of aerodynamic components has seen nearly 30 years of documented success in academia. The benefits of using CA versus a direct sensitivity analysis are shown repeatedly in the literature. However, the use of CA in industry is relatively unheard of. The sparseness of industry contributions to the field may be attributed to the tediousness of the derivation and/or to difficulties in implementation due to the lack of well-documented adjoint numerical methods. The focus of this work has been to thoroughly document the techniques required to build a two-dimensional CA inverse-design tool. To this end, this work begins with a short background on computational fluid dynamics (CFD) and the use of optimization tools in conjunction with CFD tools to solve aerodynamic optimization problems. A thorough derivation of the continuous adjoint equations and the accompanying gradient calculations for inviscid and viscous constraining equations follows the introduction. Next, the numerical techniques used for solving the partial differential equations (PDEs) governing the flow equations and the adjoint equations are described. Numerical techniques for the supplementary equations are discussed briefly. Subsequently, a verification of the efficacy of the inverse-design tool for the inviscid adjoint equations, as well as possible numerical implementation pitfalls, is discussed. The NACA0012 airfoil is used as the initial airfoil and surface pressure distribution with the NACA16009 as the desired pressure distribution, and vice versa. Using a Savitzky-Golay gradient filter, convergence (defined as a cost function < 1E-5) is reached in approximately 220 design iterations using 121 design variables. The inverse-design results using the inviscid adjoint equations are followed by a discussion of the viscous inverse-design results and the techniques used to further the convergence of the optimizer.
Limiting the step size in the line-search optimization is shown to slightly decrease the final cost function, at significant computational cost. A gradient damping technique is presented and shown to increase the convergence rate of the optimization in viscous problems at a negligible increase in computational cost, but it is insufficient to converge the solution. Systematically including adjacent surface vertices in the perturbation of a design variable, itself a surface vertex, is shown to affect the convergence capability of the viscous optimizer. Finally, a comparison of using inviscid adjoint equations, as opposed to viscous adjoint equations, on viscous flow is presented; the inviscid adjoint paired with viscous flow is found to reduce the cost function further than the viscous adjoint for the presented problem.
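The gradient-filtered inverse-design loop described above can be sketched on a toy surrogate problem. This is an illustrative sketch only, not the thesis code: the "forward model", step size, and filter settings are assumptions, with the 121 design variables, the Savitzky-Golay gradient filter, and the cost-function-below-1E-5 stopping rule taken from the abstract.

```python
import numpy as np
from scipy.signal import savgol_filter

# Toy stand-in for the forward solver: a smooth nonlinear map from the
# "surface shape" y to a "pressure distribution".
def pressure(y):
    return np.tanh(y) + 0.1 * np.roll(y, 1)

def cost(y, p_target):
    return 0.5 * np.sum((pressure(y) - p_target) ** 2)

def gradient(y, p_target, eps=1e-6):
    # Finite-difference sensitivities; the adjoint method would deliver
    # this at the cost of one extra PDE solve instead of n solves.
    g = np.zeros_like(y)
    for i in range(y.size):
        yp = y.copy()
        yp[i] += eps
        g[i] = (cost(yp, p_target) - cost(y, p_target)) / eps
    return g

n = 121                                     # design variables, as in the study
y_target = np.sin(np.linspace(0, np.pi, n))
p_target = pressure(y_target)               # "desired pressure distribution"
y = np.zeros(n)

for it in range(300):
    g = gradient(y, p_target)
    g = savgol_filter(g, window_length=11, polyorder=3)  # smooth the gradient
    y -= 0.5 * g
    if cost(y, p_target) < 1e-5:            # convergence criterion from the abstract
        break

print(it, cost(y, p_target))
```

Smoothing the gradient suppresses high-frequency noise in the design updates; on this smooth toy problem the filter is nearly transparent, so the descent still reaches the convergence threshold.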
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aguilo Valentin, Miguel Alejandro
2016-07-01
This study presents a new nonlinear programming formulation for the solution of inverse problems. First, a general inverse problem formulation based on the compliance error functional is presented. The proposed error functional enables the computation of the Lagrange multipliers, and thus the first order derivative information, at the expense of just one model evaluation. Therefore, the calculation of the Lagrange multipliers does not require the solution of the computationally intensive adjoint problem. This leads to significant speedups for large-scale, gradient-based inverse problems.
Nonlinear Rayleigh wave inversion based on the shuffled frog-leaping algorithm
NASA Astrophysics Data System (ADS)
Sun, Cheng-Yu; Wang, Yan-Yan; Wu, Dun-Shi; Qin, Xiao-Jun
2017-12-01
At present, near-surface shear wave velocities are mainly calculated through Rayleigh wave dispersion-curve inversions in engineering surface investigations, but the required calculations pose a highly nonlinear global optimization problem. In order to alleviate the risk of falling into a local optimal solution, this paper introduces a new global optimization method, the shuffled frog-leaping algorithm (SFLA), into the Rayleigh wave dispersion-curve inversion process. SFLA is a swarm-intelligence-based algorithm that simulates a group of frogs searching for food. It uses few parameters, achieves rapid convergence, and is capable of effective global search. In order to test the reliability and calculation performance of SFLA, noise-free and noisy synthetic datasets were inverted. We conducted a comparative analysis with other established algorithms using the noise-free dataset, and then tested the ability of SFLA to cope with data noise. Finally, we inverted a real-world example to examine the applicability of SFLA. Results from both synthetic and field data demonstrated the effectiveness of SFLA in the interpretation of Rayleigh wave dispersion curves. We found that SFLA is superior to the established methods in terms of both reliability and computational efficiency, so it offers great potential to improve our ability to solve geophysical inversion problems.
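The SFLA leap rules (worst frog jumps toward the memeplex best, else toward the global best, else is resampled) can be sketched in a minimal form. This is an illustrative implementation on a toy objective, under assumed population settings; the paper applies the algorithm to Rayleigh wave dispersion-curve misfits instead.

```python
import numpy as np

def sfla(f, bounds, n_frogs=30, n_memeplexes=5, n_local=10, n_shuffles=40, seed=0):
    # Minimal shuffled frog-leaping algorithm: minimize f within bounds.
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    frogs = rng.uniform(lo, hi, size=(n_frogs, lo.size))

    for _ in range(n_shuffles):
        fit = np.apply_along_axis(f, 1, frogs)
        frogs = frogs[np.argsort(fit)]          # sort: best frog first
        best_global = frogs[0].copy()
        for m in range(n_memeplexes):
            # deal frogs into memeplexes: 1st to m1, 2nd to m2, ...
            idx = np.arange(m, n_frogs, n_memeplexes)
            for _ in range(n_local):
                sub_fit = np.apply_along_axis(f, 1, frogs[idx])
                b, w = idx[np.argmin(sub_fit)], idx[np.argmax(sub_fit)]
                # leap the worst frog toward the memeplex best ...
                trial = np.clip(frogs[w] + rng.uniform() * (frogs[b] - frogs[w]), lo, hi)
                if f(trial) >= f(frogs[w]):
                    # ... failing that, toward the global best ...
                    trial = np.clip(frogs[w] + rng.uniform() * (best_global - frogs[w]), lo, hi)
                    if f(trial) >= f(frogs[w]):
                        # ... failing that, replace it with a random frog.
                        trial = rng.uniform(lo, hi)
                frogs[w] = trial
    fit = np.apply_along_axis(f, 1, frogs)
    return frogs[np.argmin(fit)], fit.min()

sphere = lambda x: float(np.sum(x ** 2))        # toy misfit with minimum 0 at the origin
x_best, f_best = sfla(sphere, (np.full(2, -5.0), np.full(2, 5.0)))
print(x_best, f_best)
```

The shuffle step (re-sorting and re-dealing the whole population each round) is what passes information between memeplexes and gives the algorithm its global character.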
Optimization of computations for adjoint field and Jacobian needed in 3D CSEM inversion
NASA Astrophysics Data System (ADS)
Dehiya, Rahul; Singh, Arun; Gupta, Pravin K.; Israil, M.
2017-01-01
We present the features and results of a newly developed code, based on the Gauss-Newton optimization technique, for solving the three-dimensional Controlled-Source Electromagnetic inverse problem. In this code a special emphasis has been put on representing the operations by block matrices for conjugate gradient iteration. We show how, in the computation of the Jacobian, the matrix formed by differentiation of the system matrix can be made independent of frequency to optimize the operations at the conjugate gradient step. Coarse-level parallel computing, using the OpenMP framework, is used primarily due to its simplicity in implementation and the accessibility of shared-memory multi-core computing machines to almost anyone. We demonstrate how the coarseness of the modeling grid in comparison to the source (computational receivers) spacing can be exploited for efficient computing, without compromising the quality of the inverted model, by reducing the number of adjoint calls. It is also demonstrated that the adjoint field can even be computed on a grid coarser than the modeling grid without affecting the inversion outcome. These observations were reconfirmed using an experiment design where the deviation of the source from a straight tow line is considered. Finally, a real field data inversion experiment is presented to demonstrate the robustness of the code.
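The Gauss-Newton/conjugate-gradient structure emphasized in the abstract can be sketched with a matrix-free inner solve: the normal matrix J^T J is never formed, only matvecs with J and J^T are applied. The tiny forward model below is a hypothetical stand-in for the 3-D CSEM solver.

```python
import numpy as np

def forward(m):
    # Hypothetical mildly nonlinear forward operator (3 data, 2 parameters).
    return np.array([m[0] ** 2 + m[1], m[0] - m[1] ** 2, m[0] * m[1]])

def jacobian(m):
    return np.array([[2 * m[0], 1.0],
                     [1.0, -2 * m[1]],
                     [m[1], m[0]]])

def gauss_newton(d_obs, m0, n_outer=10, n_cg=20):
    m = m0.astype(float)
    for _ in range(n_outer):
        r = d_obs - forward(m)
        J = jacobian(m)
        # CG on the normal equations J^T J dm = J^T r, matrix-free.
        g = J.T @ r
        dm = np.zeros_like(m)
        p = g.copy()
        for _ in range(n_cg):
            if np.linalg.norm(g) < 1e-12:
                break
            Jp = J @ p
            alpha = (g @ g) / (Jp @ Jp)
            dm = dm + alpha * p
            g_new = g - alpha * (J.T @ Jp)
            p = g_new + ((g_new @ g_new) / (g @ g)) * p
            g = g_new
        m = m + dm
    return m

m_true = np.array([1.5, 0.5])
m_est = gauss_newton(forward(m_true), np.array([1.0, 1.0]))
print(m_est)
```

Because the inner loop touches J only through products, any structure in J (such as the frequency-independent differentiated system matrix described in the abstract) can be exploited inside the matvec without changing the outer algorithm.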
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, S; Demanes, J; Kamrava, M
2015-06-15
Purpose: Surface mold applicators can be customized to fit irregular skin surfaces that are difficult to treat with other radiation therapy techniques. Optimal design of customized HDR skin brachytherapy is not well-established. We evaluated the impact of applicator thickness (source to skin distance) on target dosimetry. Methods: 27 patients had 34 treated sites: scalp 4, face 13, extremity 13, and torso 4. Custom applicators were constructed from 5–15 mm thick thermoplastic bolus molded over the skin lesion. A planar array of plastic brachytherapy catheters spaced 5–10 mm apart was affixed to the bolus. CT simulation was used to contour the target volume and to determine the prescription depth. Inverse planning simulated annealing followed by graphical optimization was used to plan and deliver 40–56 Gy in 8–16 fractions. Target coverage parameters (D90, Dmean, and V100) and dose uniformity (V110–200, D0.1cc, D1cc, and D2cc) were studied according to target depth (<5mm vs. ≥5mm) and applicator thickness (5–10mm vs. ≥10mm). Results: The average prescription depth was 4.2±1.5mm. The average bolus thickness was 9.2±2.4mm. The median CTV volume was 10.0 cc (0.2–212.4 cc). Similar target coverage was achieved with prescription depths of <5mm and ≥5mm (Dmean = 113.8% vs. 112.4% and D90 = 100.2% vs. 98.3%). The <5mm prescription depth plans were more uniform (D0.1cc = 131.8% vs. 151.8%). Bolus thickness <10mm vs. ≥10mm plans also had similar target coverage (Dmean = 118.2% vs. 110.7% and D90 = 100.1% vs. 99.0%). Applicators ≥10mm thick, however, provide more uniform target dosimetry (D0.1cc = 146.9% vs. 139.5%). Conclusion: Prescription depth is based upon the thickness of the lesion and upon the clinical needs of the patient. Applicators ≥10mm thick provide more dose uniformity than 5–10mm thick applicators. Applicator thickness is an important variable that should be considered during treatment planning to achieve optimal dose uniformity.
Liu, Wei; Liao, Zhongxing; Schild, Steven E; Liu, Zhong; Li, Heng; Li, Yupeng; Park, Peter C; Li, Xiaoqiang; Stoker, Joshua; Shen, Jiajian; Keole, Sameer; Anand, Aman; Fatyga, Mirek; Dong, Lei; Sahoo, Narayan; Vora, Sujay; Wong, William; Zhu, X Ronald; Bues, Martin; Mohan, Radhe
2015-01-01
We compared conventionally optimized intensity modulated proton therapy (IMPT) treatment plans against worst-case scenario optimized treatment plans for lung cancer. The comparison of the 2 IMPT optimization strategies focused on the resulting plans' ability to retain dose objectives under the influence of patient setup, inherent proton range uncertainty, and dose perturbation caused by respiratory motion. For each of the 9 lung cancer cases, 2 treatment plans were created that accounted for treatment uncertainties in 2 different ways. The first used the conventional method: delivery of prescribed dose to the planning target volume that is geometrically expanded from the internal target volume (ITV). The second used a worst-case scenario optimization scheme that addressed setup and range uncertainties through beamlet optimization. The plan optimality and plan robustness were calculated and compared. Furthermore, the effects on dose distributions of changes in patient anatomy attributable to respiratory motion were investigated for both strategies by comparing the corresponding plan evaluation metrics at the end-inspiration and end-expiration phase and absolute differences between these phases. The mean plan evaluation metrics of the 2 groups were compared with 2-sided paired Student t tests. Without respiratory motion considered, we affirmed that worst-case scenario optimization is superior to planning target volume-based conventional optimization in terms of plan robustness and optimality. With respiratory motion considered, worst-case scenario optimization still achieved more robust dose distributions to respiratory motion for targets and comparable or even better plan optimality (D95% ITV, 96.6% vs 96.1% [P = .26]; D5%- D95% ITV, 10.0% vs 12.3% [P = .082]; D1% spinal cord, 31.8% vs 36.5% [P = .035]). Worst-case scenario optimization led to superior solutions for lung IMPT. 
Despite the fact that worst-case scenario optimization did not explicitly account for respiratory motion, it produced motion-resistant treatment plans. However, further research is needed to incorporate respiratory motion into IMPT robust optimization. Copyright © 2015 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
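The worst-case optimization idea above, optimizing beamlet weights against the worst of several uncertainty scenarios rather than against a geometrically expanded target, can be sketched as a minimax subgradient loop. All matrices and sizes below are synthetic stand-ins; the clinical problem uses per-scenario dose-influence matrices for setup shifts and range errors.

```python
import numpy as np

rng = np.random.default_rng(1)
n_vox, n_beamlets, n_scen = 40, 15, 5
# Dose-influence matrices, one per uncertainty scenario (nominal, shifts, range errors).
D = rng.uniform(0.0, 1.0, size=(n_scen, n_vox, n_beamlets))
d_rx = np.ones(n_vox)                        # prescribed target dose (normalized)

def scenario_costs(w):
    doses = D @ w                            # (n_scen, n_vox) doses per scenario
    return np.mean((doses - d_rx) ** 2, axis=1)

w = np.full(n_beamlets, 0.1)
cost0 = scenario_costs(w).max()              # initial worst-case cost
for _ in range(2000):
    s = np.argmax(scenario_costs(w))         # pick the current worst-case scenario
    grad = 2.0 / n_vox * D[s].T @ (D[s] @ w - d_rx)
    w = np.maximum(w - 0.05 * grad, 0.0)     # subgradient step, nonnegative weights
cost_final = scenario_costs(w).max()
print(cost0, cost_final)
```

Stepping on the currently worst scenario is a standard subgradient treatment of a max-of-convex-functions objective; the clinical optimizers solve the same minimax structure with more sophisticated machinery.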
Redundant interferometric calibration as a complex optimization problem
NASA Astrophysics Data System (ADS)
Grobler, T. L.; Bernardi, G.; Kenyon, J. S.; Parsons, A. R.; Smirnov, O. M.
2018-05-01
Observations of the redshifted 21 cm line from the epoch of reionization have recently motivated the construction of low-frequency radio arrays with highly redundant configurations. These configurations provide an alternative calibration strategy - `redundant calibration' - and boost sensitivity on specific spatial scales. In this paper, we formulate calibration of redundant interferometric arrays as a complex optimization problem. We solve this optimization problem via the Levenberg-Marquardt algorithm. This calibration approach is more robust to initial conditions than current algorithms and, by leveraging an approximate matrix inversion, allows for further optimization and an efficient implementation (`redundant STEFCAL'). We also investigated using the preconditioned conjugate gradient method as an alternative to the approximate matrix inverse, but found that its computational performance is not competitive with respect to `redundant STEFCAL'. The efficient implementation of this new algorithm is made publicly available.
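The formulation of redundant calibration as a least-squares problem over complex gains and per-group visibilities can be sketched with a generic Levenberg-Marquardt-type solver standing in for the specialized algorithm in the paper. The toy east-west array, noise-free data, and real/imaginary splitting below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

n_ant = 6   # regular 1-D array: baseline length q - p indexes the redundant group
rng = np.random.default_rng(2)
g_true = 1.0 + 0.1 * (rng.normal(size=n_ant) + 1j * rng.normal(size=n_ant))
y_true = rng.normal(size=n_ant - 1) + 1j * rng.normal(size=n_ant - 1)

pairs = [(p, q) for p in range(n_ant) for q in range(p + 1, n_ant)]
# Redundant-array measurement model: V_pq = g_p conj(g_q) y_{q-p}.
data = np.array([g_true[p] * np.conj(g_true[q]) * y_true[q - p - 1] for p, q in pairs])

def unpack(x):
    g = x[:n_ant] + 1j * x[n_ant:2 * n_ant]
    y = x[2 * n_ant:3 * n_ant - 1] + 1j * x[3 * n_ant - 1:]
    return g, y

def residuals(x):
    g, y = unpack(x)
    model = np.array([g[p] * np.conj(g[q]) * y[q - p - 1] for p, q in pairs])
    r = model - data
    return np.concatenate([r.real, r.imag])     # real-valued residuals for the solver

x0 = np.concatenate([np.ones(n_ant), np.zeros(n_ant),
                     np.ones(n_ant - 1), np.zeros(n_ant - 1)])
sol = least_squares(residuals, x0, method="lm")
print(np.linalg.norm(sol.fun))
```

The known gauge degeneracies of redundant calibration (overall amplitude and phase gradients) leave a manifold of equivalent solutions, so success is judged by the residual norm rather than by recovering the exact gains.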
NASA Technical Reports Server (NTRS)
Mutambara, Arthur G. O.; Litt, Jonathan
1998-01-01
This report addresses the problem of path planning and control of robotic manipulators which have joint-position limits and joint-rate limits. The manipulators move autonomously and carry out variable tasks in a dynamic, unstructured and cluttered environment. The issue considered is whether the robotic manipulator can achieve all its tasks, and if it cannot, the objective is to identify the closest achievable goal. This problem is formalized and systematically solved for generic manipulators by using inverse kinematics and forward kinematics. Inverse kinematics are employed to define the subspace, workspace and constrained workspace, which are then used to identify when a task is not achievable. The closest achievable goal is obtained by determining weights for an optimal control redistribution scheme. These weights are quantified by using forward kinematics. Conditions leading to joint rate limits are identified, in particular it is established that all generic manipulators have singularities at the boundary of their workspace, while some have loci of singularities inside their workspace. Once the manipulator singularity is identified the command redistribution scheme is used to compute the closest achievable Cartesian velocities. Two examples are used to illustrate the use of the algorithm: A three link planar manipulator and the Unimation Puma 560. Implementation of the derived algorithm is effected by using a supervisory expert system to check whether the desired goal lies in the constrained workspace and if not, to evoke the redistribution scheme which determines the constraint relaxation between end effector position and orientation, and then computes optimal gains.
Downscaling the NOAA CarbonTracker Inversion for North America
NASA Astrophysics Data System (ADS)
Petron, G.; Andrews, A. E.; Chen, H.; Trudeau, M. E.; Eluszkiewicz, J.; Nehrkorn, T.; Henderson, J.; Sweeney, C.; Karion, A.; Masarie, K.; Bruhwiler, L.; Miller, J. B.; Miller, B. R.; Peters, W.; Gourdji, S. M.; Mueller, K. L.; Michalak, A. M.; Tans, P. P.
2011-12-01
We are developing a regional extension of the NOAA CarbonTracker CO2 data-assimilation system for a limited domain covering North America. The regional assimilation will use pre-computed and species-independent atmospheric sampling footprints from a Lagrangian Particle Dispersion Model. Each footprint relates an observed trace gas concentration to upwind fluxes. Once a footprint library has been computed, it can be used repeatedly to quickly test different inversion strategies and, importantly, for inversions using multiple species data (e.g., anthropogenic tracers such as radiocarbon and carbon monoxide and biological tracers such as carbonyl sulfide and stable isotopes of CO2). The current global CarbonTracker (CT) assimilation framework has some important limitations. For example, the assimilation adjusts scaling factors for different vegetation classes within large regions. This means, for example, that all crops within temperate North America are scaled together. There is currently no distinction between crops such as corn and sorghum, which utilize the C4 photosynthesis pathway and C3 crops like soybeans, wheat, cotton, etc. The optimization scales only the net CO2 flux, rather than adjusting photosynthesis and respiration fluxes separately, which limits the flexibility of the inversion and sometimes results in unrealistic diurnal cycles of CO2 flux. The time-series of residuals (CT - observed) for continental sites in North America reveals a persistent excess of CO2 during summer. This summertime positive bias is also apparent in the comparison of CT posterior CO2 with aircraft data and with data from Pacific marine boundary layer sites, suggesting that some of the problem may originate outside of North America. For the regional inversion, we will use footprints from the Stochastic Time-Inverted Lagrangian Transport Model driven by meteorological fields from a customized high-resolution simulation with the Weather Research Forecast (WRF) model. 
We will use empirically corrected boundary conditions in order to minimize sensitivity to inaccurate fluxes or transport outside of our domain. We plan to test a variety of inversion strategies that effectively exploit CO2 and isotopic data from the relatively dense North American sampling network for 2007-2010.
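The footprint-based inversion described above is, at its core, a linear Bayesian estimate: each observation is a footprint-weighted sum of upwind fluxes, combined with a prior. A minimal synthetic sketch (all matrices and error settings are illustrative stand-ins for the precomputed footprint library and CarbonTracker covariances):

```python
import numpy as np

rng = np.random.default_rng(6)
n_obs, n_flux = 50, 20
H = rng.uniform(0, 1, size=(n_obs, n_flux))       # footprints (sensitivities to upwind fluxes)
s_true = rng.normal(0.0, 1.0, size=n_flux)        # true fluxes
y = H @ s_true + 0.1 * rng.normal(size=n_obs)     # observed concentrations

s_prior = np.zeros(n_flux)
B_inv = np.eye(n_flux) / 1.0 ** 2                 # inverse prior flux covariance
R_inv = np.eye(n_obs) / 0.1 ** 2                  # inverse observation covariance

# Posterior mean of the standard Bayesian synthesis inversion:
# s_hat = s_prior + (H^T R^-1 H + B^-1)^-1 H^T R^-1 (y - H s_prior)
lhs = H.T @ R_inv @ H + B_inv
s_hat = s_prior + np.linalg.solve(lhs, H.T @ R_inv @ (y - H @ s_prior))
print(np.linalg.norm(s_hat - s_true))
```

Because the footprints H are precomputed once, different inversion strategies (different priors, flux partitions, or additional tracers) amount to re-solving this linear system, which is exactly the economy the footprint-library approach is after.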
Engberg, Lovisa; Forsgren, Anders; Eriksson, Kjell; Hårdemark, Björn
2017-06-01
To formulate convex planning objectives of treatment plan multicriteria optimization with explicit relationships to the dose-volume histogram (DVH) statistics used in plan quality evaluation. Conventional planning objectives are designed to minimize the violation of DVH statistics thresholds using penalty functions. Although successful in guiding the DVH curve towards these thresholds, conventional planning objectives offer limited control of the individual points on the DVH curve (doses-at-volume) used to evaluate plan quality. In this study, we abandon the usual penalty-function framework and propose planning objectives that more closely relate to DVH statistics. The proposed planning objectives are based on mean-tail-dose, resulting in convex optimization. We also demonstrate how to adapt a standard optimization method to the proposed formulation in order to obtain a substantial reduction in computational cost. We investigated the potential of the proposed planning objectives as tools for optimizing DVH statistics through juxtaposition with the conventional planning objectives on two patient cases. Sets of treatment plans with differently balanced planning objectives were generated using either the proposed or the conventional approach. Dominance in the sense of better distributed doses-at-volume was observed in plans optimized within the proposed framework. The initial computational study indicates that the DVH statistics are better optimized and more efficiently balanced using the proposed planning objectives than using the conventional approach. © 2017 American Association of Physicists in Medicine.
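The mean-tail-dose quantity underlying the proposed convex objectives is easy to state concretely: the mean dose of the hottest (or coldest) fraction v of a structure's voxels. The upper mean-tail-dose bounds the corresponding dose-at-volume from above, which is what makes it a safe convex surrogate for DVH statistics. A small numpy sketch on synthetic voxel doses (the dose values are illustrative):

```python
import numpy as np

def dose_at_volume(dose, v):
    # D_v: (approximately) the minimum dose received by the hottest fraction v of voxels.
    return np.quantile(dose, 1.0 - v)

def upper_mean_tail_dose(dose, v):
    # Mean dose of the hottest fraction v of voxels; convex in the dose vector.
    k = max(1, int(round(v * dose.size)))
    return float(np.mean(np.sort(dose)[-k:]))

rng = np.random.default_rng(3)
dose = rng.normal(50.0, 5.0, size=10_000)   # synthetic voxel doses (Gy)
v = 0.05
print(dose_at_volume(dose, v), upper_mean_tail_dose(dose, v))
```

Dose-at-volume itself is a quantile and hence non-convex in the optimization variables, whereas mean-tail-dose (the conditional-value-at-risk analogue) admits convex formulations, which is the key design choice of the proposed planning objectives.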
Time-reversal and Bayesian inversion
NASA Astrophysics Data System (ADS)
Debski, Wojciech
2017-04-01
The probabilistic inversion technique is superior to the classical optimization-based approach in all but one respect: it requires quite exhaustive computations, which prohibit its use in very large inverse problems such as global seismic tomography or waveform inversion, to name a few. The advantages of the approach are, however, so appealing that there is a continuous ongoing effort to make such large inverse tasks manageable with the probabilistic inverse approach. One promising possibility for achieving this goal relies on exploiting an internal symmetry of the seismological modeling problems at hand: time-reversal and reciprocity invariance. These two basic properties of the elastic wave equation, when incorporated into the probabilistic inversion scheme, open new horizons for Bayesian inversion. In this presentation we discuss the time-reversal symmetry property and its mathematical aspects, and propose how to combine it with probabilistic inverse theory into a compact, fast inversion algorithm. We illustrate the proposed idea with the newly developed location algorithm TRMLOC and discuss its efficiency when applied to mining-induced seismic data.
Using Fisher Information Criteria for Chemical Sensor Selection via Convex Optimization Methods
2016-11-16
…proceeds from the determinant of the inverse Fisher information matrix, which is proportional to the global error volume. If a practitioner has a suitable… …design of statistical estimators (i.e. sensors), as their respective inverses act as lower bounds to the (co)variances of the subject estimator, a property…
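The Fisher-information criterion referenced in the excerpt (minimizing the determinant of the inverse Fisher information matrix, i.e. the global error volume, equivalently maximizing det FIM) can be illustrated with a simple greedy sensor-selection sketch. This greedy heuristic and the synthetic response vectors are assumptions for illustration; the report itself uses convex optimization methods.

```python
import numpy as np

rng = np.random.default_rng(7)
n_sensors, n_params, k = 30, 4, 6
S = rng.normal(size=(n_sensors, n_params))    # per-sensor response rows (synthetic)

chosen = []
for _ in range(k):
    best, best_det = None, -1.0
    for i in range(n_sensors):
        if i in chosen:
            continue
        rows = S[chosen + [i]]
        # Fisher information of the candidate sensor set (unit-noise linear model),
        # with a tiny ridge so the determinant is defined for small sets.
        fim = rows.T @ rows + 1e-9 * np.eye(n_params)
        det = np.linalg.det(fim)
        if det > best_det:
            best, best_det = i, det
    chosen.append(best)                       # D-optimal greedy pick

print(chosen, best_det)
```

Maximizing det(FIM) shrinks the volume of the estimator's confidence ellipsoid, which is the Cramér-Rao-style lower-bound property the excerpt alludes to.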
CMOS-based Stochastically Spiking Neural Network for Optimization under Uncertainties
2017-03-01
…inverse tangent characteristics at varying input voltage (VIN) [Fig. 3], thereby it is suitable for kernel function implementation. By varying bias… …cost function/constraint variables are generated based on an inverse transform of the CDF. In Fig. 5, F^-1(u) for a uniformly distributed random number u in [0, 1]… …extracts random samples of x varying with the CDF F(x). In Fig. 6, we present a successive approximation (SA) circuit to evaluate the inverse…
Acceleration for 2D time-domain elastic full waveform inversion using a single GPU card
NASA Astrophysics Data System (ADS)
Jiang, Jinpeng; Zhu, Peimin
2018-05-01
Full waveform inversion (FWI) is a challenging procedure due to the high computational cost of the modeling, especially for the elastic case. The graphics processing unit (GPU) has become a popular device for high-performance computing (HPC). To reduce the long computation time, we design and implement a GPU-based 2D elastic FWI (EFWI) in the time domain using a single GPU card. We parallelize the forward modeling and gradient calculations using the CUDA programming language. To overcome the limitation of the relatively small global memory on the GPU, a boundary-saving strategy is exploited to reconstruct the forward wavefield. Moreover, the L-BFGS optimization method used in the inversion accelerates the convergence of the misfit function. A multiscale inversion strategy is adopted in the workflow to obtain accurate inversion results. In our tests, the GPU-based implementations using a single GPU device achieve >15 times speedup in forward modeling, and about 12 times speedup in gradient calculation, compared with eight-core CPU implementations optimized by OpenMP. The results from the GPU implementations are verified to have sufficient accuracy by comparison with the results obtained from the CPU implementations.
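The multiscale L-BFGS workflow mentioned above (fit heavily smoothed data first, then refine at finer scales from the previous result) can be sketched on a toy 1-D problem. The convolutional forward model and smoothing scales are illustrative assumptions standing in for elastic wave propagation and frequency-band continuation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.ndimage import gaussian_filter1d

kernel = np.exp(-0.5 * (np.arange(-10, 11) / 3.0) ** 2)

def forward(m):
    # Toy linear "wave propagation": convolution with a smooth kernel.
    return np.convolve(m, kernel, mode="same")

m_true = np.zeros(80)
m_true[25:35] = 1.0
m_true[50:55] = -0.5
d_obs = forward(m_true)

m = np.zeros(80)
for sigma in (8.0, 3.0, 0.0):                   # coarse-to-fine stages
    d_s = gaussian_filter1d(d_obs, sigma) if sigma > 0 else d_obs

    def misfit(m_vec, sigma=sigma, d_s=d_s):
        pred = forward(m_vec)
        if sigma > 0:
            pred = gaussian_filter1d(pred, sigma)
        r = pred - d_s
        r_b = gaussian_filter1d(r, sigma) if sigma > 0 else r  # approx. adjoint of smoothing
        grad = np.convolve(r_b, kernel[::-1], mode="same")     # adjoint of the forward model
        return 0.5 * np.dot(r, r), grad

    m = minimize(misfit, m, jac=True, method="L-BFGS-B").x     # warm-start next stage

print(np.max(np.abs(forward(m) - d_obs)))
```

Each stage's solution warm-starts the next, which is the mechanism by which multiscale continuation steers L-BFGS past the local minima that plague single-scale FWI.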
Genetic algorithms and their use in Geophysical Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parker, Paul B.
1999-04-01
Genetic algorithms (GAs), global optimization methods that mimic Darwinian evolution are well suited to the nonlinear inverse problems of geophysics. A standard genetic algorithm selects the best or ''fittest'' models from a ''population'' and then applies operators such as crossover and mutation in order to combine the most successful characteristics of each model and produce fitter models. More sophisticated operators have been developed, but the standard GA usually provides a robust and efficient search. Although the choice of parameter settings such as crossover and mutation rate may depend largely on the type of problem being solved, numerous results show that certain parameter settings produce optimal performance for a wide range of problems and difficulties. In particular, a low (about half of the inverse of the population size) mutation rate is crucial for optimal results, but the choice of crossover method and rate do not seem to affect performance appreciably. Optimal efficiency is usually achieved with smaller (< 50) populations. Lastly, tournament selection appears to be the best choice of selection methods due to its simplicity and its autoscaling properties. However, if a proportional selection method is used such as roulette wheel selection, fitness scaling is a necessity, and a high scaling factor (> 2.0) should be used for the best performance. Three case studies are presented in which genetic algorithms are used to invert for crustal parameters. The first is an inversion for basement depth at Yucca mountain using gravity data, the second an inversion for velocity structure in the crust of the south island of New Zealand using receiver functions derived from teleseismic events, and the third is a similar receiver function inversion for crustal velocities beneath the Mendocino Triple Junction region of Northern California. 
The inversions demonstrate that genetic algorithms are effective in solving problems with reasonably large numbers of free parameters and with computationally expensive objective function calculations. More sophisticated techniques are presented for special problems. Niching and island model algorithms are introduced as methods to find multiple, distinct solutions to the nonunique problems that are typically seen in geophysics. Finally, hybrid algorithms are investigated as a way to improve the efficiency of the standard genetic algorithm.
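The parameter recommendations above (small population, tournament selection, and a mutation rate of about half the inverse of the population size) can be sketched in a minimal real-coded GA. The toy misfit and operator details below are illustrative assumptions, not the study's inversion code.

```python
import numpy as np

rng = np.random.default_rng(4)

def fitness(pop):
    # Toy objective: maximum fitness 0 when every gene equals 0.3.
    return -np.sum((pop - 0.3) ** 2, axis=1)

def ga(n_pop=40, n_genes=8, n_gen=200, p_cross=0.7):
    p_mut = 0.5 / n_pop                      # "half the inverse of the population size"
    pop = rng.uniform(0, 1, size=(n_pop, n_genes))
    for _ in range(n_gen):
        fit = fitness(pop)
        # Tournament selection: keep the better of two random individuals.
        a, b = rng.integers(0, n_pop, (2, n_pop))
        parents = pop[np.where(fit[a] > fit[b], a, b)]
        # Uniform crossover between consecutive parent pairs.
        children = parents.copy()
        for i in range(0, n_pop - 1, 2):
            if rng.uniform() < p_cross:
                mask = rng.uniform(size=n_genes) < 0.5
                children[i, mask], children[i + 1, mask] = (
                    parents[i + 1, mask], parents[i, mask])
        # Mutation: resample a gene with low probability.
        mask = rng.uniform(size=children.shape) < p_mut
        children[mask] = rng.uniform(0, 1, size=mask.sum())
        # Elitism: carry the generation's best individual forward.
        children[0] = pop[np.argmax(fit)]
        pop = children
    fit = fitness(pop)
    return pop[np.argmax(fit)], fit.max()

x_best, f_best = ga()
print(x_best, f_best)
```

Tournament selection needs no fitness scaling because only rank order matters, which is the "autoscaling" property the abstract credits it with.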
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, W; Schild, S; Bues, M
Purpose: We compared conventionally optimized intensity-modulated proton therapy (IMPT) treatment plans against worst-case robustly optimized treatment plans for lung cancer. The comparison of the two IMPT optimization strategies focused on the resulting plans' ability to retain dose objectives under the influence of patient set-up, inherent proton range uncertainty, and dose perturbation caused by respiratory motion. Methods: For each of the 9 lung cancer cases, two treatment plans were created, accounting for treatment uncertainties in two different ways. The first used the conventional method: delivery of the prescribed dose to the planning target volume (PTV) that is geometrically expanded from the internal target volume (ITV). The second employed the worst-case robust optimization scheme that addressed set-up and range uncertainties through beamlet optimization. The plan optimality and plan robustness were calculated and compared. Furthermore, the effects on dose distributions of the changes in patient anatomy due to respiratory motion were investigated for both strategies by comparing the corresponding plan evaluation metrics at the end-inspiration and end-expiration phases and the absolute differences between these phases. The mean plan evaluation metrics of the two groups were compared using two-sided paired t-tests. Results: Without respiratory motion considered, we affirmed that worst-case robust optimization is superior to PTV-based conventional optimization in terms of plan robustness and optimality. With respiratory motion considered, robust optimization still leads to more robust dose distributions to respiratory motion for targets and comparable or even better plan optimality [D95% ITV: 96.6% versus 96.1% (p=0.26), D5% - D95% ITV: 10.0% versus 12.3% (p=0.082), D1% spinal cord: 31.8% versus 36.5% (p=0.035)]. Conclusion: Worst-case robust optimization led to superior solutions for lung IMPT. 
Despite the fact that robust optimization did not explicitly account for respiratory motion, it produced motion-resistant treatment plans. However, further research is needed to incorporate respiratory motion into IMPT robust optimization.
Wilkie, Joel R.; Matuszak, Martha M.; Feng, Mary; Moran, Jean M.; Fraass, Benedick A.
2013-01-01
Purpose: Plan degradation resulting from compromises made to enhance delivery efficiency is an important consideration for intensity modulated radiation therapy (IMRT) treatment plans. IMRT optimization and/or multileaf collimator (MLC) sequencing schemes can be modified to generate more efficient treatment delivery, but the effect those modifications have on plan quality is often difficult to quantify. In this work, the authors present a method for quantitative assessment of overall plan quality degradation due to tradeoffs between delivery efficiency and treatment plan quality, illustrated using comparisons between plans developed allowing different numbers of intensity levels in IMRT optimization and/or MLC sequencing for static segmental MLC IMRT plans. Methods: A plan quality degradation method to evaluate delivery efficiency and plan quality tradeoffs was developed and used to assess planning for 14 prostate and 12 head and neck patients treated with static IMRT. Plan quality was evaluated using a physician's predetermined “quality degradation” factors for relevant clinical plan metrics associated with the plan optimization strategy. Delivery efficiency and plan quality were assessed for a range of optimization and sequencing limitations. The “optimal” (baseline) plan for each case was derived using a clinical cost function with an unlimited number of intensity levels. These plans were sequenced with a clinical MLC leaf sequencer which uses >100 segments, assuring delivered intensities to be within 1% of the optimized intensity pattern. Each patient's optimal plan was also sequenced limiting the number of intensity levels (20, 10, and 5), and then separately optimized with these same numbers of intensity levels. Delivery time was measured for all plans, and direct evaluation of the tradeoffs between delivery time and plan degradation was performed. 
Results: When considering tradeoffs, the optimal number of intensity levels depends on the treatment site and on the stage in the process at which the levels are limited. The cost of improved delivery efficiency, in terms of plan quality degradation, increased as the number of intensity levels in the sequencer or optimizer decreased. The degradation was more substantial for the head and neck cases relative to the prostate cases, particularly when fewer than 20 intensity levels were used. Plan quality degradation was less severe when the number of intensity levels was limited in the optimizer rather than the sequencer. Conclusions: Analysis of plan quality degradation allows for a quantitative assessment of the compromises in clinical plan quality as delivery efficiency is improved, in order to determine the optimal delivery settings. The technique is based on physician-determined quality degradation factors and can be extended to other clinical situations where investigation of various tradeoffs is warranted. PMID:23822412
Transonic airfoil analysis and design in nonuniform flow
NASA Technical Reports Server (NTRS)
Chang, J. F.; Lan, C. E.
1986-01-01
A nonuniform transonic airfoil code is developed for applications in analysis, inverse design and direct optimization involving an airfoil immersed in propfan slipstream. Problems concerning the numerical stability, convergence, divergence and solution oscillations are discussed. The code is validated by comparing with some known results in incompressible flow. A parametric investigation indicates that the airfoil lift-drag ratio can be increased by decreasing the thickness ratio. A better performance can be achieved if the airfoil is located below the slipstream center. Airfoil characteristics designed by the inverse method and a direct optimization are compared. The airfoil designed with the method of direct optimization exhibits better characteristics and achieves a gain of 22 percent in lift-drag ratio with a reduction of 4 percent in thickness.
An optimal resolved rate law for kinematically redundant manipulators
NASA Technical Reports Server (NTRS)
Bourgeois, B. J.
1987-01-01
The resolved rate law for a manipulator provides the instantaneous joint rates required to satisfy a given instantaneous hand motion. When the joint space has more degrees of freedom than the task space, the manipulator is kinematically redundant and the kinematic rate equations are underdetermined. These equations can be locally optimized, but the resulting pseudo-inverse solution has been found to cause large joint rates in some cases. A weighting matrix in the locally optimized (pseudo-inverse) solution is dynamically adjusted to control the joint motion as desired. Joint reach limit avoidance is demonstrated in a kinematically redundant planar arm model. The treatment is applicable to redundant manipulators with any number of revolute joints and to non-planar manipulators.
Optimal Limited Contingency Planning
NASA Technical Reports Server (NTRS)
Meuleau, Nicolas; Smith, David E.
2003-01-01
For a given problem, the optimal Markov policy over a finite horizon is a conditional plan containing a potentially large number of branches. However, there are applications where it is desirable to strictly limit the number of decision points and branches in a plan. This raises the question of how one goes about finding optimal plans containing only a limited number of branches. In this paper, we present an any-time algorithm for optimal k-contingency planning. It is the first optimal algorithm for limited contingency planning that is not an explicit enumeration of possible contingent plans. By modelling the problem as a partially observable Markov decision process, it implements the Bellman optimality principle and prunes the solution space. We present experimental results of applying this algorithm to some simple test cases.
NASA Astrophysics Data System (ADS)
Li, N.; Yue, X. Y.
2018-03-01
Macroscopic root water uptake models proportional to a root density distribution function (RDDF) are the most commonly used approach to modeling water uptake by plants. Because water uptake is difficult and labor-intensive to measure, these models are often calibrated by inverse modeling. Most previous inversion studies assume the RDDF to be constant in depth and time, or dependent on depth only, for simplification. Under field conditions, however, this function varies with soil type and root growth and thus changes with both depth and time. This study proposes an inverse method to calibrate an RDDF that varies in both space and time in unsaturated water flow modeling. To overcome the difficulty imposed by ill-posedness, the calibration is formulated as an optimization problem in the framework of Tikhonov regularization theory, adding an additional constraint to the objective function. The resulting nonlinear optimization problem is then solved numerically with an efficient algorithm based on the finite element method. The advantage of our method is that the inverse problem is translated into a Tikhonov regularization functional minimization problem and solved via a variational construction, which circumvents the computational complexity of calculating the sensitivity matrix involved in many derivative-based parameter estimation approaches (e.g., Levenberg-Marquardt optimization). Moreover, the proposed method optimizes the RDDF without assuming any prior form, which makes it applicable to more general root water uptake models. Numerical examples illustrate the applicability and effectiveness of the proposed method. Finally, discussions on the stability and extension of this method are presented.
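The stabilizing idea behind the paper's calibration, penalizing the data-fit objective with a Tikhonov regularization term, can be illustrated apart from its FEM machinery. A minimal numpy sketch on a toy ill-conditioned linear problem; the matrix, noise, and penalty weight `alpha` are invented for illustration and are not the paper's formulation:

```python
import numpy as np

def tikhonov_solve(A, b, alpha):
    """Solve min ||A x - b||^2 + alpha * ||x||^2 via the normal equations.

    The penalty term stabilizes an ill-posed inversion at the cost of a
    small bias toward zero.
    """
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

# Nearly singular forward operator: almost collinear columns.
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
x_true = np.array([1.0, 1.0])
b = A @ x_true + np.array([1e-4, -1e-4])   # tiny measurement noise

x_plain = np.linalg.solve(A, b)             # unregularized: noise amplified
x_reg = tikhonov_solve(A, b, alpha=1e-3)    # regularized: stays near truth
```

Even noise at the 1e-4 level is amplified to order-one errors by the plain inverse, while the regularized solution remains close to `x_true`; this is the same trade-off the abstract's additional constraint manages.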
Salehi, Mojtaba; Bahreininejad, Ardeshir
2011-08-01
Optimization of process planning is considered the key technology for computer-aided process planning, which is a rather complex and difficult procedure. A good process plan of a part is built on two elements: (1) the optimized sequence of the operations of the part; and (2) the optimized selection of the machine, cutting tool, and Tool Access Direction (TAD) for each operation. In the present work, process planning is divided into preliminary planning and secondary/detailed planning. In the preliminary stage, the feasible sequences are generated using an intelligent search strategy, based on an analysis of order and clustering constraints treated as a compulsive constraint aggregation in operation sequencing. Then, in the detailed planning stage, a genetic algorithm that prunes the initial feasible sequences yields the optimized operation sequence and the optimized selection of the machine, cutting tool, and TAD for each operation, based on optimization constraints treated as an additive constraint aggregation. The main contribution of this work is the simultaneous optimization, via intelligent search and the genetic algorithm, of the operation sequence and of the machine, cutting tool, and TAD selection for each operation.
Modelling night-time ecosystem respiration by a constrained source optimization method
Chun-Tai Lai; Gabriel Katul; John Butnor; David Ellsworth; Ram Oren
2002-01-01
One of the main challenges in quantifying ecosystem carbon budgets is properly quantifying the magnitude of night-time ecosystem respiration. Inverse Lagrangian dispersion analysis provides a promising approach to this problem when measured mean CO2 concentration profiles and nocturnal velocity statistics are available. An inverse...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tho, Lye Mun; Glegg, Martin; Paterson, Jennifer
2006-10-01
Purpose: The relationship between volume of irradiated small bowel (VSB) and acute toxicity in rectal cancer radiotherapy is poorly quantified, particularly in patients receiving concurrent preoperative chemoradiotherapy. Using treatment planning data, we studied a series of such patients. Methods and Materials: Details of 41 patients with locally advanced rectal cancer were reviewed. All received 45 Gy in 25 fractions over 5 weeks using 3-4 field three-dimensional conformal radiotherapy, with daily 5-fluorouracil and folinic acid during Weeks 1 and 5. Toxicity was assessed prospectively in a weekly clinic. Using computed tomography planning software, the VSB was determined at 5 Gy dose intervals (V5, V10, etc.). Eight patients with maximal VSB had dosimetry and radiobiological modeling outcomes compared between inverse and conformal three-dimensional planning. Results: VSB correlated strongly with diarrheal severity at every dose level (p < 0.03), with the strongest correlation at the lowest doses. Median VSB differed significantly between patients experiencing Grade 0-1 and Grade 2-4 diarrhea (p ≤ 0.05). No correlation was found with anorexia, nausea, vomiting, abdominal cramps, age, body mass index, sex, tumor position, or number of fields. Analysis of 8 patients showed that inverse planning reduced the median dose to small bowel by 5.1 Gy (p = 0.008) and the calculated late normal tissue complication probability (NTCP) by 67% (p = 0.016). We constructed a mathematical model to predict acute diarrhea from V5 and V15. Conclusions: A strong dose-volume relationship exists between VSB and acute diarrhea at all dose levels during preoperative chemoradiotherapy. Our model may be useful in predicting toxicity, and it was derived without the confounding influence of surgical excision on bowel function.
Inverse planning can reduce calculated dose to small bowel and late NTCP, and its clinical role warrants further investigation.
A fast optimization algorithm for multicriteria intensity modulated proton therapy planning.
Chen, Wei; Craft, David; Madden, Thomas M; Zhang, Kewu; Kooy, Hanne M; Herman, Gabor T
2010-09-01
To describe a fast projection algorithm for optimizing intensity modulated proton therapy (IMPT) plans and to describe and demonstrate the use of this algorithm in multicriteria IMPT planning. The authors develop a projection-based solver for a class of convex optimization problems and apply it to IMPT treatment planning. The speed of the solver permits its use in multicriteria optimization, where several optimizations are performed which span the space of possible treatment plans. The authors describe a plan database generation procedure which is customized to the requirements of the solver. The optimality precision of the solver can be specified by the user. The authors apply the algorithm to three clinical cases: a pancreas case, an esophagus case, and a case with a tumor along the rib cage. Detailed analysis of the pancreas case shows that the algorithm is orders of magnitude faster than industry-standard general-purpose algorithms (MOSEK's interior point, primal simplex, and dual simplex optimizers). Additionally, the projection solver has almost no memory overhead. The speed and guaranteed accuracy of the algorithm make it suitable for use in multicriteria treatment planning, which requires the computation of several diverse treatment plans. Additionally, given the low memory overhead of the algorithm, the method can be extended to include multiple geometric instances and proton range possibilities, for robust optimization.
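The abstract does not spell out the projection algorithm itself, but the general flavor of projection-based solvers for constrained convex problems can be sketched with projected gradient descent on a toy nonnegative least-squares problem, where the nonnegativity constraint mimics physical beamlet intensities. The matrix `D`, target `d`, and all parameters below are illustrative assumptions, not the authors' solver:

```python
import numpy as np

def projected_gradient_nnls(D, d, iters=2000):
    """Minimize ||D x - d||^2 subject to x >= 0 by projected gradient:
    take a gradient step, then project back onto the feasible set
    (here, the nonnegative orthant)."""
    n = D.shape[1]
    step = 1.0 / np.linalg.norm(D, 2) ** 2   # 1/L for this objective
    x = np.zeros(n)
    for _ in range(iters):
        grad = D.T @ (D @ x - d)             # (half) gradient of the fit term
        x = np.maximum(x - step * grad, 0.0) # projection onto x >= 0
    return x

rng = np.random.default_rng(0)
D = rng.random((20, 8))                  # toy dose-influence matrix
x_true = np.abs(rng.normal(size=8))      # feasible "true" intensities
d = D @ x_true                           # achievable prescribed dose
x = projected_gradient_nnls(D, d)
```

Because the projection is a cheap elementwise clamp, each iteration is essentially one matrix-vector product, which is the property that makes such solvers fast and nearly memory-free compared with general-purpose interior point methods.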
NASA Astrophysics Data System (ADS)
Gaddy, Melissa R.; Yıldız, Sercan; Unkelbach, Jan; Papp, Dávid
2018-01-01
Spatiotemporal fractionation schemes, that is, treatments delivering different dose distributions in different fractions, can potentially lower treatment side effects without compromising tumor control. This can be achieved by hypofractionating parts of the tumor while delivering approximately uniformly fractionated doses to the surrounding tissue. Plan optimization for such treatments is based on biologically effective dose (BED); however, this leads to computationally challenging nonconvex optimization problems. Optimization methods that are in current use yield only locally optimal solutions, and it has hitherto been unclear whether these plans are close to the global optimum. We present an optimization framework to compute rigorous bounds on the maximum achievable normal tissue BED reduction for spatiotemporal plans. The approach is demonstrated on liver tumors, where the primary goal is to reduce mean liver BED without compromising any other treatment objective. The BED-based treatment plan optimization problems are formulated as quadratically constrained quadratic programming (QCQP) problems. First, a conventional, uniformly fractionated reference plan is computed using convex optimization. Then, a second, nonconvex, QCQP model is solved to local optimality to compute a spatiotemporally fractionated plan that minimizes mean liver BED, subject to the constraints that the plan is no worse than the reference plan with respect to all other planning goals. Finally, we derive a convex relaxation of the second model in the form of a semidefinite programming problem, which provides a rigorous lower bound on the lowest achievable mean liver BED. The method is presented on five cases with distinct geometries. The computed spatiotemporal plans achieve 12-35% mean liver BED reduction over the optimal uniformly fractionated plans. 
This reduction corresponds to 79-97% of the gap between the mean liver BED of the uniform reference plans and our lower bounds on the lowest achievable mean liver BED. The results indicate that spatiotemporal treatments can achieve substantial reductions in normal tissue dose and BED, and that local optimization techniques provide high-quality plans that are close to realizing the maximum potential normal tissue dose reduction.
Inverse design of multicomponent assemblies
NASA Astrophysics Data System (ADS)
Piñeros, William D.; Lindquist, Beth A.; Jadrich, Ryan B.; Truskett, Thomas M.
2018-03-01
Inverse design can be a useful strategy for discovering interactions that drive particles to spontaneously self-assemble into a desired structure. Here, we extend an inverse design methodology—relative entropy optimization—to determine isotropic interactions that promote assembly of targeted multicomponent phases, and we apply this extension to design interactions for a variety of binary crystals ranging from compact triangular and square architectures to highly open structures with dodecagonal and octadecagonal motifs. We compare the resulting optimized (self- and cross) interactions for the binary assemblies to those obtained from optimization of analogous single-component systems. This comparison reveals that self-interactions act as a "primer" to position particles at approximately correct coordination shell distances, while cross interactions act as the "binder" that refines and locks the system into the desired configuration. For simpler binary targets, it is possible to successfully design self-assembling systems while restricting one of these interaction types to be a hard-core-like potential. However, optimization of both self- and cross interaction types appears necessary to design for assembly of more complex or open structures.
Empirical Tests of Acceptance Sampling Plans
NASA Technical Reports Server (NTRS)
White, K. Preston, Jr.; Johnson, Kenneth L.
2012-01-01
Acceptance sampling is a quality control procedure applied as an alternative to 100% inspection. A random sample of items is drawn from a lot to determine the fraction of items which have a required quality characteristic. Both the number of items to be inspected and the criterion for determining conformance of the lot to the requirement are given by an appropriate sampling plan with specified risks of Type I and Type II sampling errors. In this paper, we present the results of empirical tests of the accuracy of selected sampling plans reported in the literature. These plans are for measurable quality characteristics which are known to have either binomial, exponential, normal, gamma, Weibull, inverse Gaussian, or Poisson distributions. In the main, results support the accepted wisdom that variables acceptance plans are superior to attributes (binomial) acceptance plans, in the sense that they provide comparable protection against risks at reduced sampling cost. For the Gaussian and Weibull plans, however, there are ranges of the shape parameters for which the required sample sizes are in fact larger than those of the corresponding attributes plans, dramatically so for instances of large skew. Tests further confirm that the published inverse Gaussian (IG) plan is flawed, as reported by White and Johnson (2011).
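The mechanics of an attributes (binomial) acceptance plan are easy to check empirically: draw n items, accept the lot if at most c are defective. A stdlib-only Monte Carlo sketch of two points on the operating characteristic curve of a hypothetical plan (n = 50, c = 2; the plan parameters are invented for illustration, not taken from the paper):

```python
import random

def accept_prob(n, c, p, trials=20000, seed=1):
    """Estimate the probability a lot is accepted under an attributes
    plan (n, c): inspect n items, accept if at most c are defective.
    For a large lot, each item is approximately Bernoulli(p)."""
    rng = random.Random(seed)
    accepted = 0
    for _ in range(trials):
        defects = sum(rng.random() < p for _ in range(n))
        if defects <= c:
            accepted += 1
    return accepted / trials

# Hypothetical plan n=50, c=2:
good = accept_prob(50, 2, p=0.01)   # high-quality lots: mostly accepted
bad = accept_prob(50, 2, p=0.10)    # poor-quality lots: mostly rejected
```

The same simulation machinery, with the defect indicator replaced by a draw from a Weibull or inverse Gaussian measurement model, is the kind of empirical test the paper applies to variables plans.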
NASA Astrophysics Data System (ADS)
Qu, Z.; Henze, D. K.; Wang, J.; Xu, X.; Wang, Y.
2017-12-01
Quantifying emissions trends of nitrogen oxides (NOx) and sulfur dioxide (SO2) is important for improving understanding of air pollution and the effectiveness of emission control strategies. We estimate long-term (2005-2016) global (2° x 2.5° resolution) and regional (North America and East Asia at 0.5° x 0.667° resolution) NOx emissions using a recently developed hybrid (mass-balance / 4D-Var) method with GEOS-Chem. NASA standard product and DOMINO retrievals of NO2 column are both used to constrain emissions; comparison of these results provides insight into regions where trends are most robust with respect to retrieval uncertainties, and highlights regions where seemingly significant trends are retrieval-specific. To incorporate chemical interactions among species, we extend our hybrid method to assimilate NO2 and SO2 observations and optimize NOx and SO2 emissions simultaneously. Due to chemical interactions, inclusion of SO2 observations leads to 30% grid-scale differences in posterior NOx emissions compared to those constrained only by NO2 observations. When assimilating and optimizing both species in pseudo observation tests, the sum of the normalized mean squared error (compared to the true emissions) of NOx and SO2 posterior emissions are 54-63% smaller than when observing/constraining a single species. NOx and SO2 emissions are also correlated through the amount of fuel combustion. To incorporate this correlation into the inversion, we optimize seven sector-specific emission scaling factors, including industry, energy, residential, aviation, transportation, shipping and agriculture. We compare posterior emissions from inversions optimizing only species' emissions, only sector-based emissions, and both species' and sector-based emissions. In situ measurements of NOx and SO2 are applied to evaluate the performance of these inversions. The impacts of the inversion on PM2.5 and O3 concentrations and premature deaths are also evaluated.
NASA Astrophysics Data System (ADS)
Chen, Xudong
2010-07-01
This paper proposes a version of the subspace-based optimization method to solve the inverse scattering problem with an inhomogeneous background medium where the known inhomogeneities are bounded in a finite domain. Although the background Green's function at each discrete point in the computational domain is not directly available in an inhomogeneous background scenario, the paper uses the finite element method to simultaneously obtain the Green's function at all discrete points. The essence of the subspace-based optimization method is that part of the contrast source is determined from the spectrum analysis without using any optimization, whereas the orthogonally complementary part is determined by solving a lower dimension optimization problem. This feature significantly speeds up the convergence of the algorithm and at the same time makes it robust against noise. Numerical simulations illustrate the efficacy of the proposed algorithm. The algorithm presented in this paper finds wide applications in nondestructive evaluation, such as through-wall imaging.
An improved grey wolf optimizer algorithm for the inversion of geoelectrical data
NASA Astrophysics Data System (ADS)
Li, Si-Yu; Wang, Shu-Ming; Wang, Peng-Fei; Su, Xiao-Lu; Zhang, Xin-Song; Dong, Zhi-Hui
2018-05-01
The grey wolf optimizer (GWO) is a novel bionics algorithm inspired by the social rank and prey-seeking behaviors of grey wolves. The GWO algorithm is easy to implement because of its basic concept, simple formula, and small number of parameters. This paper develops a GWO algorithm with a nonlinear convergence factor and an adaptive location-updating strategy and applies this improved grey wolf optimizer (IGWO) algorithm to geophysical inversion problems using magnetotelluric (MT), DC resistivity, and induced polarization (IP) methods. Numerical tests in MATLAB 2010b on both forward-modeled and observed data show that the IGWO algorithm can find the global minimum and rarely becomes trapped in local minima. For further study, results inverted using the IGWO are contrasted with those of particle swarm optimization (PSO) and the simulated annealing (SA) algorithm. The comparison reveals that, for a given number of iterations, the IGWO and PSO both balance exploration and exploitation better than the SA algorithm.
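For orientation, the basic GWO update (not the paper's improved IGWO variant) can be sketched in a few lines: every wolf moves toward the three current best wolves (alpha, beta, delta), and a control parameter decays from 2 to 0 to shift the pack from exploration to exploitation. The test function, bounds, and settings below are illustrative assumptions:

```python
import random

def gwo(f, dim, bounds, n_wolves=20, iters=200, seed=0):
    """Minimal basic grey wolf optimizer (minimization).

    The three best wolves lead; the rest update each coordinate toward
    the average of the leaders' pull. 'a' decays linearly from 2 to 0
    (the IGWO paper replaces this with a nonlinear convergence factor).
    """
    rng = random.Random(seed)
    lo, hi = bounds
    wolves = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_wolves)]
    for t in range(iters):
        wolves.sort(key=f)                    # wolves[0..2] = alpha, beta, delta
        leaders = wolves[:3]
        a = 2.0 * (1 - t / iters)             # linear convergence factor
        for i in range(3, n_wolves):
            new = []
            for d in range(dim):
                pulls = []
                for leader in leaders:
                    A = a * (2 * rng.random() - 1)
                    C = 2 * rng.random()
                    D = abs(C * leader[d] - wolves[i][d])
                    pulls.append(leader[d] - A * D)
                new.append(min(max(sum(pulls) / 3.0, lo), hi))
            wolves[i] = new
    return min(wolves, key=f)

sphere = lambda x: sum(v * v for v in x)      # toy objective, minimum at 0
best = gwo(sphere, dim=3, bounds=(-10, 10))
```

In a geophysical inversion, `f` would instead be the misfit between observed and forward-modeled MT/DC/IP responses for a candidate layered model.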
Bai, Mingsian R; Tung, Chih-Wei; Lee, Chih-Chung
2005-05-01
An optimal design technique for loudspeaker arrays for cross-talk cancellation, with application in three-dimensional audio, is presented. An array focusing scheme is formulated on the basis of the inverse propagation that relates the transducers to a set of chosen control points. Tikhonov regularization is employed in designing the inverse cancellation filters. An extensive analysis is conducted to explore cancellation performance and robustness issues. To best compromise between the performance and robustness of the cross-talk cancellation system, optimal configurations are obtained with the aid of the Taguchi method and the genetic algorithm (GA). The proposed systems are further justified by physical as well as subjective experiments. The results reveal that a large number of loudspeakers, a closely spaced configuration, and optimal control point design all contribute to the robustness of cross-talk cancellation systems (CCS) against head misalignment.
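The Tikhonov-regularized inverse filter mentioned above has a compact closed form at each frequency, H = (G^H G + βI)^{-1} G^H, where G is the plant matrix from loudspeaker signals to control-point pressures and β trades cancellation accuracy against robustness. A toy numpy sketch with an invented 2x2 single-frequency plant (a real design uses measured propagation paths and many control points):

```python
import numpy as np

def tikhonov_inverse(G, beta):
    """Regularized inverse filter matrix H = (G^H G + beta*I)^-1 G^H
    for one frequency bin. Larger beta gives a more robust but less
    accurate canceller."""
    n = G.shape[1]
    return np.linalg.solve(G.conj().T @ G + beta * np.eye(n), G.conj().T)

# Hypothetical plant: 2 loudspeakers to 2 ears at a single frequency.
G = np.array([[1.0 + 0.0j, 0.6 + 0.2j],
              [0.6 - 0.2j, 1.0 + 0.0j]])
H = tikhonov_inverse(G, beta=1e-3)
crosstalk = G @ H   # ideally the identity: no leakage between ears
```

With a small β the product G·H is close to identity (good cancellation); increasing β shrinks the filter gains, which is what buys robustness to head misalignment at the cost of residual cross-talk.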
A preprocessing strategy for helioseismic inversions
NASA Astrophysics Data System (ADS)
Christensen-Dalsgaard, J.; Thompson, M. J.
1993-05-01
Helioseismic inversion in general involves considerable computational expense, due to the large number of modes that is typically considered. This is true in particular of the widely used optimally localized averages (OLA) inversion methods, which require the inversion of one or more matrices whose order is the number of modes in the set. However, the number of practically independent pieces of information that a large helioseismic mode set contains is very much less than the number of modes, suggesting that the set might first be reduced before the expensive inversion is performed. We demonstrate with a model problem that by first performing a singular value decomposition the original problem may be transformed into a much smaller one, reducing considerably the cost of the OLA inversion and with no significant loss of information.
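The proposed reduction can be mimicked in a few lines of numpy: build a large mode set that carries only a few independent pieces of information, find its effective rank with an SVD, and keep only the dominant singular directions before any expensive inversion. The sizes and rank below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
# 200 "mode kernels" over a 50-point model, but constructed so they
# span only a 5-dimensional subspace (the rows are highly redundant).
basis = rng.normal(size=(5, 50))
K = rng.normal(size=(200, 5)) @ basis

U, s, Vt = np.linalg.svd(K, full_matrices=False)
k = int(np.sum(s > 1e-8 * s[0]))     # effective rank of the mode set

# Project the 200 constraints onto the k dominant singular directions:
K_small = np.diag(s[:k]) @ Vt[:k]    # k x 50 reduced problem
```

An OLA inversion on `K_small` now works with k-by-k matrices instead of 200-by-200, while `U[:, :k] @ K_small` reproduces the original kernels to numerical precision, i.e. no significant information is lost.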
Niedermayr, Thomas R; Nguyen, Paul L; Murciano-Goroff, Yonina R; Kovtun, Konstantin A; Neubauer Sugar, Emily; Cail, Daniel W; O'Farrell, Desmond A; Hansen, Jorgen L; Cormack, Robert A; Buzurovic, Ivan; Wolfsberger, Luciant T; O'Leary, Michael P; Steele, Graeme S; Devlin, Philip M; Orio, Peter F
2014-01-01
We sought to determine whether placing empty catheters within the prostate and then inverse planning iodine-125 seed locations within those catheters (High Dose Rate-Emulating Low Dose Rate Prostate Brachytherapy [HELP] technique) would improve concordance between planned and achieved dosimetry compared with a standard intraoperative technique. We examined 30 consecutive low dose rate prostate cases performed by standard intraoperative technique of planning followed by needle placement/seed deposition and compared them to 30 consecutive low dose rate prostate cases performed by the HELP technique. The primary endpoint was concordance between planned percentage of the clinical target volume that receives at least 100% of the prescribed dose/dose that covers 90% of the volume of the clinical target volume (V100/D90) and the actual V100/D90 achieved at Postoperative Day 1. The HELP technique had superior concordance between the planned target dosimetry and what was actually achieved at Day 1 and Day 30. Specifically, target D90 at Day 1 was on average 33.7 Gy less than planned for the standard intraoperative technique but was only 10.5 Gy less than planned for the HELP technique (p < 0.001). Day 30 values were 16.6 Gy less vs. 2.2 Gy more than planned, respectively (p = 0.028). Day 1 target V100 was 6.3% less than planned with standard vs. 2.8% less for HELP (p < 0.001). There was no significant difference between the urethral and rectal concordance (all p > 0.05). Placing empty needles first and optimizing the plan to the known positions of the needles resulted in improved concordance between the planned and the achieved dosimetry to the target, possibly because of elimination of errors in needle placement. Copyright © 2014 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Gustafsson, C.; Nordström, F.; Persson, E.; Brynolfsson, J.; Olsson, L. E.
2017-04-01
Dosimetric errors in a magnetic resonance imaging (MRI) only radiotherapy workflow may be caused by system specific geometric distortion from MRI. The aim of this study was to evaluate the impact on planned dose distribution and delineated structures for prostate patients, originating from this distortion. A method was developed, in which computed tomography (CT) images were distorted using the MRI distortion field. The displacement map for an optimized MRI treatment planning sequence was measured using a dedicated phantom in a 3 T MRI system. To simulate the distortion aspects of a synthetic CT (electron density derived from MR images), the displacement map was applied to CT images, referred to as distorted CT images. A volumetric modulated arc prostate treatment plan was applied to the original CT and the distorted CT, creating a reference and a distorted CT dose distribution. By applying the inverse of the displacement map to the distorted CT dose distribution, a dose distribution in the same geometry as the original CT images was created. For 10 prostate cancer patients, the dose difference between the reference dose distribution and the inverse distorted CT dose distribution was analyzed in isodose level bins. The mean magnitude of the geometric distortion was 1.97 mm for the radial distance of 200-250 mm from isocenter. The mean percentage dose differences for all isodose level bins were ≤0.02%, and the radiotherapy structure mean volume deviations were <0.2%. The method developed can quantify the dosimetric effects of MRI system specific distortion in a prostate MRI only radiotherapy workflow, separated from dosimetric effects originating from synthetic CT generation. No clinically relevant dose difference or structure deformation was found when 3D distortion correction and a high acquisition bandwidth were used. The method could be used for any MRI sequence together with any anatomy of interest.
TU-H-BRC-05: Stereotactic Radiosurgery Optimized with Orthovoltage Beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fagerstrom, J; Culberson, W; Bender, E
2016-06-15
Purpose: To achieve improved stereotactic radiosurgery (SRS) dose distributions using orthovoltage energy fluence modulation with inverse planning optimization techniques. Methods: A pencil beam model was used to calculate dose distributions from the institution’s orthovoltage unit at 250 kVp. Kernels for the model were derived using Monte Carlo methods as well as measurements with radiochromic film. The orthovoltage photon spectra, modulated by varying thicknesses of attenuating material, were approximated using open-source software. A genetic algorithm search heuristic routine was used to optimize added tungsten filtration thicknesses to approach rectangular function dose distributions at depth. Optimizations were performed for depths of 2.5, 5.0, and 7.5 cm, with cone sizes of 8, 10, and 12 mm. Results: Circularly-symmetric tungsten filters were designed based on the results of the optimization, to modulate the orthovoltage beam across the aperture of an SRS cone collimator. For each depth and cone size combination examined, the beam flatness and 80–20% and 90–10% penumbrae were calculated for both standard, open cone-collimated beams as well as for the optimized, filtered beams. For all configurations tested, the modulated beams were able to achieve improved penumbra widths and flatness statistics at depth, with flatness improving between 33 and 52%, and penumbrae improving between 18 and 25% for the modulated beams compared to the unmodulated beams. Conclusion: A methodology has been described that may be used to optimize the spatial distribution of added filtration material in an orthovoltage SRS beam to result in dose distributions at depth with improved flatness and penumbrae compared to standard open cones. This work provides the mathematical foundation for a novel, orthovoltage energy fluence-modulated SRS system.
Fan, Jiawei; Wang, Jiazhou; Zhang, Zhen; Hu, Weigang
2017-06-01
To develop a new automated treatment planning solution for breast and rectal cancer radiotherapy. The automated treatment planning solution developed in this study includes selection of an iteratively optimized training dataset, dose-volume histogram (DVH) prediction for the organs at risk (OARs), and automatic generation of clinically acceptable treatment plans. The training dataset is selected by iterative optimization from 40 treatment plans for left-breast and rectal cancer patients who received radiation therapy. A two-dimensional kernel density estimation algorithm (referred to as the two-parameter KDE), which incorporates two predictive features, was implemented to produce the predicted DVHs. Finally, 10 additional left-breast treatment plans were re-planned using the Pinnacle 3 Auto-Planning (AP) module (version 9.10, Philips Medical Systems) with objective functions derived from the predicted DVH curves. The automatically generated, re-optimized treatment plans were compared with the original manually optimized plans. By combining the iteratively optimized training dataset with the two-parameter KDE prediction algorithm, the proposed automated planning strategy improves the accuracy of the DVH prediction. The automatically generated treatment plans using objectives derived from the predicted DVHs can achieve better dose sparing for some OARs without compromising other metrics of plan quality. The proposed automated treatment planning solution can be used to efficiently evaluate and improve the quality and consistency of treatment plans for intensity-modulated breast and rectal cancer radiation therapy. © 2017 American Association of Physicists in Medicine.
Kierkels, Roel G J; Wopken, Kim; Visser, Ruurd; Korevaar, Erik W; van der Schaaf, Arjen; Bijl, Hendrik P; Langendijk, Johannes A
2016-12-01
Radiotherapy of the head and neck is challenged by the relatively large number of organs-at-risk close to the tumor. Biologically oriented objective functions (OFs) could optimally distribute the dose among the organs-at-risk. We aimed to explore OFs based on multivariable normal tissue complication probability (NTCP) models for grade 2-4 dysphagia (DYS) and tube feeding dependence (TFD). One hundred head and neck cancer patients were studied. In addition to the clinical plan, two more plans (an OF_DYS-plan and an OF_TFD-plan) were optimized per patient. The NTCP models included up to four dose-volume parameters and other non-dosimetric factors. A fully automatic plan optimization framework was used to optimize the OF_NTCP-based plans. All OF_NTCP-based plans were reviewed and classified as clinically acceptable. On average, the Δdose and ΔNTCP were small when comparing the OF_DYS-plan, OF_TFD-plan, and clinical plan. For 5% of patients, NTCP_TFD was reduced by >5% using OF_TFD-based planning compared to the OF_DYS-plans. Plan optimization using NTCP_DYS- and NTCP_TFD-based objective functions resulted in clinically acceptable plans. For patients with considerable risk factors for TFD, the OF_TFD steered the optimizer toward dose distributions which directly led to slightly lower predicted NTCP_TFD values as compared to the other studied plans. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
A fast optimization approach for treatment planning of volumetric modulated arc therapy.
Yan, Hui; Dai, Jian-Rong; Li, Ye-Xiong
2018-05-30
Volumetric modulated arc therapy (VMAT) is widely used in clinical practice. It not only significantly reduces treatment time, but also produces high-quality treatment plans. Current optimization approaches rely heavily on stochastic algorithms, which are time-consuming and less repeatable. In this study, a novel approach is proposed to provide a highly efficient optimization algorithm for VMAT treatment planning. A progressive sampling strategy is employed for the beam arrangement of VMAT planning. The initial, equally spaced beams are added to the plan at a coarse sampling resolution. Fluence-map optimization and leaf-sequencing are performed for these beams. Then, the coefficients of the fluence-map optimization algorithm are adjusted according to the known fluence maps of these beams. In the next round the sampling resolution is doubled and more beams are added. This process continues until the total number of beams is reached. The performance of the VMAT optimization algorithm was evaluated using three clinical cases and compared to that of a commercial planning system. The dosimetric quality of the VMAT plans is equal to or better than that of the corresponding IMRT plans for the three clinical cases. The maximum dose to critical organs is reduced considerably for VMAT plans compared to IMRT plans, especially in the head and neck case. The total number of segments and monitor units is reduced for VMAT plans. For the three clinical cases, VMAT optimization took less than 5 min using the proposed approach, 3-4 times faster than the commercial system. The proposed VMAT optimization algorithm is able to produce high-quality VMAT plans efficiently and consistently. It presents a new way to accelerate the current optimization process of VMAT planning.
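The progressive sampling strategy above (equal-spaced coarse beams first, then doubled angular resolution each round until the beam budget is reached) can be sketched directly; the starting count of 8 beams and the 360° gantry range are illustrative assumptions, not the paper's parameters.

```python
def progressive_beam_angles(total, start=8):
    """Rounds of gantry angles: equal-spaced coarse beams first, then doubled
    sampling resolution each round until `total` beams have been placed."""
    placed = set()
    rounds = []
    n = start
    while len(placed) < total:
        step = 360.0 / n
        # Only angles not already placed at a coarser resolution are new.
        new = [i * step for i in range(n) if i * step not in placed]
        new = new[: total - len(placed)]   # respect the overall beam budget
        rounds.append(new)
        placed.update(new)
        n *= 2                             # double the sampling resolution
    return rounds

rounds = progressive_beam_angles(24)
```

Each round's fluence maps would be optimized before the next, finer round is added, which is what lets the optimizer warm-start its coefficients from the coarser solution.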
2017-03-01
RECRUITING WITH THE NEW PLANNED RESOURCE OPTIMIZATION MODEL WITH EXPERIMENTAL DESIGN (PROM-WED), by Allison R. Hogarth, March 2017 thesis. … has historically used a non-linear optimization model, the Planned Resource Optimization (PRO) model, to help inform decisions on the allocation of …
Ménigot, Sébastien; Girault, Jean-Marc
2013-01-01
Ultrasound contrast imaging has provided more accurate medical diagnoses thanks to the development of innovating modalities like the pulse inversion imaging. However, this latter modality that improves the contrast-to-tissue ratio (CTR) is not optimal, since the frequency is manually chosen jointly with the probe. However, an optimal choice of this command is possible, but it requires precise information about the transducer and the medium which can be experimentally difficult to obtain, even inaccessible. It turns out that the optimization can become more complex by taking into account the kind of generators, since the generators of electrical signals in a conventional ultrasound scanner can be unipolar, bipolar, or tripolar. Our aim was to seek the ternary command which maximized the CTR. By combining a genetic algorithm and a closed loop, the system automatically proposed the optimal ternary command. In simulation, the gain compared with the usual ternary signal could reach about 3.9 dB. Another interesting finding was that, in contrast to what is generally accepted, the optimal command was not a fixed-frequency signal but had harmonic components.
Inverse design of bulk morphologies in block copolymers using particle swarm optimization
NASA Astrophysics Data System (ADS)
Khadilkar, Mihir; Delaney, Kris; Fredrickson, Glenn
Multiblock polymers are a versatile platform for creating a large range of nanostructured materials with novel morphologies and properties. However, achieving desired structures or property combinations is difficult due to a vast design space comprised of parameters including monomer species, block sequence, block molecular weights and dispersity, copolymer architecture, and binary interaction parameters. Navigating through such vast design spaces to achieve an optimal formulation for a target structure or property set requires an efficient global optimization tool wrapped around a forward simulation technique such as self-consistent field theory (SCFT). We report on such an inverse design strategy utilizing particle swarm optimization (PSO) as the global optimizer and SCFT as the forward prediction engine. To avoid metastable states in forward prediction, we utilize pseudo-spectral variable cell SCFT initiated from a library of defect free seeds of known block copolymer morphologies. We demonstrate that our approach allows for robust identification of block copolymers and copolymer alloys that self-assemble into a targeted structure, optimizing parameters such as block fractions, blend fractions, and Flory chi parameters.
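The inverse-design loop described above wraps a global optimizer around a forward predictor. A minimal particle swarm optimization (PSO) sketch follows; the quadratic-plus-ripple "mismatch" function merely stands in for the SCFT forward engine (PSO only needs a black-box objective), and the swarm size, inertia, and acceleration constants are generic assumed values.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in forward model (NOT SCFT): maps design parameters in [0,1]^2 to a
# scalar "structure mismatch" against a target; PSO treats it as a black box.
target = np.array([0.3, 0.7])
def mismatch(x):
    return np.sum((x - target) ** 2) + 0.1 * np.sin(10.0 * x).sum() ** 2

dim, n = 2, 20
x = rng.uniform(0.0, 1.0, (n, dim))        # particle positions
v = np.zeros((n, dim))                     # particle velocities
pbest = x.copy()                           # personal bests
pval = np.array([mismatch(p) for p in x])
g = pbest[np.argmin(pval)]                 # global best
init_best = pval.min()

for it in range(200):
    r1, r2 = rng.uniform(size=(2, n, dim))
    # Standard PSO update: inertia + cognitive + social terms.
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
    x = np.clip(x + v, 0.0, 1.0)
    f = np.array([mismatch(p) for p in x])
    better = f < pval
    pbest[better], pval[better] = x[better], f[better]
    g = pbest[np.argmin(pval)]
```

Because personal bests never worsen, the incumbent solution improves monotonically, which is the property that makes PSO a convenient wrapper around an expensive forward simulation.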
Un, M Kerem; Kaghazchi, Hamed
2018-01-01
When a signal is initiated in the nerve, it is transmitted along each nerve fiber via an action potential (called a single fiber action potential (SFAP)) which travels with a velocity that is related to the diameter of the fiber. The additive superposition of SFAPs constitutes the compound action potential (CAP) of the nerve. The fiber diameter distribution (FDD) in the nerve can be computed from the CAP data by solving an inverse problem. This is usually achieved by dividing the fibers into a finite number of diameter groups and solving a corresponding linear system to optimize the FDD. However, the fibers in a nerve can sometimes number in the thousands, and it is possible to assume a continuous distribution for the fiber diameters, which leads to a gradient optimization problem. In this paper, we have evaluated this continuous approach to the solution of the inverse problem. We have utilized an analytical function for the SFAP and assumed a polynomial form for the FDD. The inverse problem involves the optimization of polynomial coefficients to obtain the best estimate for the FDD. We have observed that an eighth-order polynomial for the FDD can capture both unimodal and bimodal fiber distributions present in vivo, even in the case of noisy CAP data. The assumed FDD distribution regularizes the ill-conditioned inverse problem and produces good results.
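Because the CAP is linear in the FDD, parameterizing the FDD as a polynomial turns the inverse problem into a small least-squares fit for the coefficients. The sketch below uses a noise-free toy for clarity (the paper also treats noisy CAP data); the biphasic wavelet, the velocity-diameter proportionality (v = 4·d mm/ms), and the propagation distance are all assumed, not the paper's analytical SFAP.

```python
import numpy as np

# Toy SFAP: a biphasic wavelet arriving after a delay set by conduction
# velocity, taken proportional to fiber diameter (assumption: v = 4*d mm/ms).
t = np.linspace(0.0, 30.0, 600)                      # time axis (ms)
def sfap(t, d, dist=200.0):                          # d: diameter (um)
    u = t - dist / (4.0 * d)                         # arrival-time shift
    return u * np.exp(-u ** 2) * (u > 0)

diam = np.linspace(2.0, 12.0, 60)                    # diameter grid (um)
true_fdd = (np.exp(-0.5 * ((diam - 6.0) / 1.2) ** 2)
            + 0.6 * np.exp(-0.5 * ((diam - 10.0) / 0.8) ** 2))  # bimodal FDD

# Forward model: CAP is the additive superposition of SFAPs -> linear in FDD.
A = np.stack([sfap(t, d) for d in diam], axis=1)     # (time, diameter)
cap = A @ true_fdd                                   # "measured" CAP

# Continuous FDD: 8th-order polynomial in standardized diameter, so the
# unknowns are 9 coefficients rather than one weight per diameter bin.
V = np.vander((diam - diam.mean()) / diam.std(), 9, increasing=True)
coeffs, *_ = np.linalg.lstsq(A @ V, cap, rcond=None)
fdd_est = V @ coeffs
```

Restricting the solution to a 9-dimensional polynomial space is what regularizes the otherwise ill-conditioned per-diameter unmixing.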
Zeng, Chuan; Giantsoudi, Drosoula; Grassberger, Clemens; Goldberg, Saveli; Niemierko, Andrzej; Paganetti, Harald; Efstathiou, Jason A.; Trofimov, Alexei
2013-01-01
Purpose: Biological effect of radiation can be enhanced with hypofractionation, localized dose escalation, and, in particle therapy, with optimized distribution of linear energy transfer (LET). The authors describe a method to construct inhomogeneous fractional dose (IFD) distributions, and evaluate the potential gain in the therapeutic effect from their delivery in proton therapy delivered by pencil beam scanning. Methods: For 13 cases of prostate cancer, the authors considered hypofractionated courses of 60 Gy delivered in 20 fractions. (All doses denoted in Gy include the proton's mean relative biological effectiveness (RBE) of 1.1.) Two types of plans were optimized using two opposed lateral beams to deliver a uniform dose of 3 Gy per fraction to the target by scanning: (1) in conventional full-target plans (FTP), each beam irradiated the entire gland, (2) in split-target plans (STP), beams irradiated only the respective proximal hemispheres (prostate split sagittally). Inverse planning yielded intensity maps, in which discrete position control points of the scanned beam (spots) were assigned optimized intensity values. FTP plans preferentially required a higher intensity of spots in the distal part of the target, while STP, by design, employed proximal spots. To evaluate the utility of IFD delivery, IFD plans were generated by rearranging the spot intensities from FTP or STP intensity maps, separately as well as combined using a variety of mixing weights. IFD courses were designed so that, in alternating fractions, one of the hemispheres of the prostate would receive a dose boost and the other receive a lower dose, while the total physical dose from the IFD course was roughly uniform across the prostate. IFD plans were normalized so that the equivalent uniform dose (EUD) of rectum and bladder did not increase, compared to the baseline FTP plan, which irradiated the prostate uniformly in every fraction. 
An EUD-based model was then applied to estimate tumor control probability (TCP) and normal tissue complication probability (NTCP). To assess potential local RBE variations, LET distributions were calculated with Monte Carlo, and compared for different plans. The results were assessed in terms of their sensitivity to uncertainties in model parameters and delivery. Results: IFD courses included an equal number of fractions boosting either hemisphere; thus, the combined physical dose was close to uniform throughout the prostate. However, for the entire course, the prostate EUD in IFD was higher than in conventional FTP by up to 14%, corresponding to an estimated increase in TCP to 96% from 88%. The extent of gain depended on the mixing factor, i.e., the relative weights used to combine FTP and STP spot weights. Increased weighting of STP typically yielded a higher target EUD, but also led to increased sensitivity of dose to variations in the proton's range. Rectal and bladder EUD were the same or lower (per normalization), and the NTCP for both remained below 1%. The LET distributions in IFD also depended strongly on the mixing weights: plans using a higher weight of STP spots yielded higher LET, indicating a potentially higher local RBE. Conclusions: In proton therapy delivered by pencil beam scanning, improved therapeutic outcome can potentially be expected with delivery of IFD distributions, while administering the prescribed quasi-uniform dose to the target over the entire course. The biological effectiveness of IFD may be further enhanced by optimizing the LET distributions. IFD distributions are characterized by a dose gradient located in proximity of the prostate's midplane; thus, the fidelity of delivery would depend crucially on the precision with which the proton range could be controlled. PMID:23635256
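The EUD-based evaluation above rests on the standard generalized EUD, gEUD = (Σᵢ vᵢ Dᵢᵃ)^(1/a), with TCP obtained from a Niemierko-style logistic in EUD. The sketch below uses illustrative parameters (TCD50 = 60 Gy, γ50 = 2, a = -10 for tumor), not the values fitted in the paper.

```python
import numpy as np

def geud(doses, volumes, a):
    """Generalized EUD: (sum_i v_i * D_i^a)^(1/a), v_i = fractional volumes.
    Large negative `a` (tumor-like) makes gEUD track the coldest subvolumes."""
    v = np.asarray(volumes, float)
    v = v / v.sum()
    return float((v * np.asarray(doses, float) ** a).sum() ** (1.0 / a))

def tcp(eud, tcd50=60.0, gamma50=2.0):
    """Phenomenological logistic TCP in EUD; parameters here are illustrative."""
    return 1.0 / (1.0 + (tcd50 / eud) ** (4.0 * gamma50))

# A course alternating 66/54 Gy hemisphere boosts: physical mean is 60 Gy,
# but the tumor gEUD is pulled toward the colder fraction.
e = geud([66.0, 54.0], [1.0, 1.0], -10)
```

This is why the paper reports gains in terms of EUD rather than physical dose: for a tumor-like (negative) `a`, the alternating boost pattern and its biological weighting diverge from the near-uniform summed physical dose.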
Dong, Peng; Liu, Hongcheng; Xing, Lei
2018-06-04
An important yet challenging problem in LINAC-based rotational arc radiation therapy is the design of the beam trajectory, which requires simultaneous consideration of delivery efficiency and the final dose distribution. In this work, we propose a novel trajectory selection strategy by developing a Monte Carlo tree search (MCTS) algorithm for the beam trajectory selection process. Methods: To search through the vast number of possible trajectories, an MCTS algorithm was implemented. In this approach, a candidate trajectory is explored by starting from a leaf node and sequentially examining the next level of linked nodes with consideration of geometric and physical constraints. The maximum Upper Confidence Bound for Trees, a function of the average objective function value and the number of times the node under testing has been visited, was employed to intelligently select the trajectory. For each candidate trajectory, we run an inverse fluence map optimization with an infinity-norm regularization. The ranking of the plan as measured by the corresponding objective function value was then fed back to update the statistics of the nodes on the trajectory. The method was evaluated with a chest wall and a brain case, and the results were compared with the coplanar and noncoplanar 4π beam configurations. Results: For both clinical cases, the MCTS method found effective and easy-to-deliver trajectories within an hour. As compared with the coplanar plans, it offers much better sparing of the OARs while maintaining the PTV coverage. The quality of the MCTS-generated plan is found to be comparable to that of the 4π plans. Conclusion: AI based on MCTS is valuable for facilitating the design of beam trajectories and paves the way for future clinical use of noncoplanar treatment delivery. © 2018 Institute of Physics and Engineering in Medicine.
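The node-selection rule named above is the standard UCT (Upper Confidence Bound for Trees) formula: average value plus an exploration bonus that shrinks as a node is revisited. A minimal sketch follows; the child statistics are invented for illustration, and in the paper's setting the "value" would come from the plan-objective ranking fed back after fluence-map optimization.

```python
import math

def uct_score(avg_value, visits, parent_visits, c=1.4):
    """UCT: exploitation (average value of plans through this node) plus an
    exploration bonus for rarely visited nodes; c is a tunable constant."""
    if visits == 0:
        return math.inf            # unvisited children are always tried first
    return avg_value + c * math.sqrt(math.log(parent_visits) / visits)

# Select the next control point: the child maximizing the UCT score.
children = [
    {"avg": 0.6, "visits": 10},    # well-explored, decent average
    {"avg": 0.9, "visits": 2},     # promising but barely explored
    {"avg": 0.0, "visits": 0},     # never tried
]
best = max(range(3), key=lambda i: uct_score(children[i]["avg"],
                                             children[i]["visits"], 12))
```

With these numbers the unvisited child is selected first; once all children have statistics, the bonus term balances exploiting high-scoring trajectory branches against exploring under-sampled ones.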
Noncoplanar VMAT for nasopharyngeal tumors: Plan quality versus treatment time.
Wild, Esther; Bangert, Mark; Nill, Simeon; Oelfke, Uwe
2015-05-01
The authors investigated the potential of optimized noncoplanar irradiation trajectories for volumetric modulated arc therapy (VMAT) treatments of nasopharyngeal patients and studied the trade-off between treatment plan quality and delivery time in radiation therapy. For three nasopharyngeal patients, the authors generated treatment plans for nine different delivery scenarios using dedicated optimization methods. They compared these scenarios according to dose characteristics, number of beam directions, and estimated delivery times. In particular, the authors generated the following treatment plans: (1) a 4π plan, which is a non-sequenced, fluence-optimized plan that uses approximately 1400 noncoplanar beam directions and marks a theoretical upper limit of treatment plan quality, (2) a coplanar 2π plan with 72 coplanar beam directions as the counterpart to the noncoplanar 4π plan, (3) a coplanar VMAT plan, (4) a coplanar step and shoot (SnS) plan, (5) a beam angle optimized (BAO) coplanar SnS IMRT plan, (6) a noncoplanar BAO SnS plan, (7) a VMAT plan with rotated treatment couch, (8) a noncoplanar VMAT plan with an optimized great circle around the patient, and (9) a noncoplanar BAO VMAT plan with an arbitrary trajectory around the patient. VMAT using optimized noncoplanar irradiation trajectories reduced the mean and maximum doses in organs at risk compared to coplanar VMAT plans by 19% on average while the target coverage remained constant. A coplanar BAO SnS plan was superior to coplanar SnS or VMAT; however, noncoplanar plans like a noncoplanar BAO SnS plan or noncoplanar VMAT yielded a better plan quality than the best coplanar 2π plan. The treatment plan quality of VMAT plans depended on the length of the trajectory. The delivery times of noncoplanar VMAT plans were estimated to be 6.5 min on average: 1.6 min longer than a coplanar plan but on average 2.8 min faster than a noncoplanar SnS plan with comparable treatment plan quality.
The authors' study reconfirms the dosimetric benefits of noncoplanar irradiation of nasopharyngeal tumors. Both SnS using optimized noncoplanar beam ensembles and VMAT using an optimized, arbitrary, noncoplanar trajectory enabled dose reductions in organs at risk compared to coplanar SnS and VMAT. Using great circles or simple couch rotations to implement noncoplanar VMAT, however, was not sufficient to yield meaningful improvements in treatment plan quality. The authors estimate that noncoplanar VMAT using arbitrary optimized irradiation trajectories comes at an increased delivery time compared to coplanar VMAT yet at a decreased delivery time compared to noncoplanar SnS IMRT.
Preview-Based Stable-Inversion for Output Tracking
NASA Technical Reports Server (NTRS)
Zou, Qing-Ze; Devasia, Santosh
1999-01-01
Stable Inversion techniques can be used to achieve high-accuracy output tracking. However, for nonminimum phase systems, the inverse is non-causal - hence the inverse has to be pre-computed using a pre-specified desired-output trajectory. This requirement for pre-specification of the desired output restricts the use of inversion-based approaches to trajectory planning problems (for nonminimum phase systems). In the present article, it is shown that preview information of the desired output can be used to achieve online inversion-based output tracking of linear systems. The amount of preview-time needed is quantified in terms of the tracking error and the internal dynamics of the system (zeros of the system). The methodology is applied to the online output tracking of a flexible structure and experimental results are presented.
Applying Wave (registered trademark) to Build an Air Force Community of Interest Shared Space
2007-08-01
Performance. It is essential that an inverse transform be defined for every transform, or else the query mediator must be smart enough to figure out how to invert it. Without an inverse transform, if an incoming query constrains on the transformed attribute, the query mediator might generate a query plan that is horribly inefficient. If you must code a custom transformation function, you must also code the inverse transform. Putting the …
NASA Astrophysics Data System (ADS)
Kountouris, Panagiotis; Gerbig, Christoph; Rödenbeck, Christian; Karstens, Ute; Koch, Thomas F.; Heimann, Martin
2018-03-01
Optimized biogenic carbon fluxes for Europe were estimated from high-resolution regional-scale inversions, utilizing atmospheric CO2 measurements at 16 stations for the year 2007. Additional sensitivity tests with different data-driven error structures were performed. As the atmospheric network is rather sparse and consequently contains large spatial gaps, we use a priori biospheric fluxes to further constrain the inversions. The biospheric fluxes were simulated by the Vegetation Photosynthesis and Respiration Model (VPRM) at a resolution of 0.1° and optimized against eddy covariance data. Overall we estimate an a priori uncertainty of 0.54 GtC yr-1 related to the poor spatial representation between the biospheric model and the ecosystem sites. The sink estimated from the atmospheric inversions for the area of Europe (as represented in the model domain) ranges between 0.23 and 0.38 GtC yr-1 (0.39 and 0.71 GtC yr-1 up-scaled to geographical Europe). This is within the range of posterior flux uncertainty estimates of previous studies using ground-based observations.
Khan, Muhammad Isa; Jiang, Runqing; Kiciak, Alexander; ur Rehman, Jalil; Afzal, Muhammad; Chow, James C. L.
2016-01-01
This study compared prostate volumetric-modulated arc therapy (VMAT) plans with intensity-modulated radiotherapy (IMRT) plans after the prostate IMRT technique was replaced by VMAT in an institution. Characterization of dosimetric and radiobiological variation in the prostate was based on the treatment plans of 40 prostate IMRT patients (planning target volume = 77.8–335 cm3) and 50 VMAT patients (planning target volume = 120–351 cm3) treated before and after 2013, respectively. Both IMRT and VMAT plans used the same dose-volume criteria in the inverse planning optimization. The dose-volume histogram, mean doses of target and normal tissues (rectum, bladder and femoral heads), dose-volume points (D99% of planning target volume; D30%, D50%, V30Gy and V35Gy of rectum and bladder; D5%, V14Gy, V22Gy of femoral heads), conformity index (CI), homogeneity index (HI), gradient index (GI), prostate tumor control probability (TCP), and rectal normal tissue complication probability (NTCP) based on the Lyman-Kutcher-Burman algorithm were calculated for each IMRT and VMAT plan. The VMAT plans were found to be better, with a 1.05% higher CI and a 0.83% lower HI and 0.75% lower GI than IMRT. Comparing doses in normal tissues between IMRT and VMAT, IMRT mostly delivered about 1.05% higher doses to the normal tissues than VMAT. Prostate TCP and rectal NTCP were about 1% higher for VMAT than for IMRT. The VMAT technique can thus decrease the dose-volume evaluation criteria for the normal tissues. Based on our dosimetric and radiobiological results, it is concluded that our VMAT implementation could produce comparable or slightly better target coverage and normal tissue sparing with a faster treatment time in prostate radiotherapy. PMID:27651562
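Two of the plan-quality metrics above have compact standard definitions: the Paddick conformity index and an ICRU-style homogeneity index. A sketch computing both from voxel arrays follows; the idealized test plan is invented for illustration, and the paper's exact HI definition may differ from the (D2% - D98%)/D50% form used here.

```python
import numpy as np

def paddick_ci(dose, target_mask, rx):
    """Paddick CI = TV_PIV^2 / (TV * PIV): overlap of the target volume (TV)
    and the prescription isodose volume (PIV); 1.0 is perfect conformity."""
    piv = dose >= rx
    tv_piv = np.logical_and(piv, target_mask).sum()
    return tv_piv ** 2 / (target_mask.sum() * piv.sum())

def homogeneity_index(dose, target_mask):
    """HI = (D2% - D98%) / D50% inside the target (0 = perfectly homogeneous);
    D2% is the dose covering the hottest 2% of the target volume."""
    d = dose[target_mask]
    d2, d50, d98 = np.percentile(d, [98, 50, 2])
    return (d2 - d98) / d50

# Idealized plan: exactly the prescription inside the target, nothing outside.
mask = np.zeros(1000, dtype=bool)
mask[:200] = True
dose = np.where(mask, 70.0, 0.0)
```

For this idealized dose the CI is exactly 1 and the HI exactly 0, which gives a quick sanity check before applying the metrics to real dose grids.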
Eversion-Inversion Labral Repair and Reconstruction Technique for Optimal Suction Seal.
Moreira, Brett; Pascual-Garrido, Cecilia; Chadayamurri, Vivek; Mei-Dan, Omer
2015-12-01
Labral tears are a significant cause of hip pain and are currently the most common indication for hip arthroscopy. Compared with labral debridement, labral repair has significantly better outcomes in terms of both daily activities and athletic pursuits in the setting of femoral acetabular impingement. The classic techniques described in the literature for labral repair all use loop or pass-through intrasubstance labral sutures to achieve a functional hip seal. This hip seal is important for hip stability and optimal joint biomechanics, as well as in the prevention of long-term osteoarthritis. We describe a novel eversion-inversion intrasubstance suturing technique for labral repair and reconstruction that can assist in restoration of the native labrum position by re-creating an optimal seal around the femoral head.
Multigrid-based reconstruction algorithm for quantitative photoacoustic tomography
Li, Shengfu; Montcel, Bruno; Yuan, Zhen; Liu, Wanyu; Vray, Didier
2015-01-01
This paper proposes a multigrid inversion framework for quantitative photoacoustic tomography reconstruction. The forward model of optical fluence distribution and the inverse problem are solved at multiple resolutions. A fixed-point iteration scheme is formulated for each resolution and used as a cost function. The simulated and experimental results for quantitative photoacoustic tomography reconstruction show that the proposed multigrid inversion can dramatically reduce the required number of iterations for the optimization process without loss of reliability in the results. PMID:26203371
Computationally efficient control allocation
NASA Technical Reports Server (NTRS)
Durham, Wayne (Inventor)
2001-01-01
A computationally efficient method for calculating near-optimal solutions to the three-objective, linear control allocation problem is disclosed. The control allocation problem is that of distributing the effort of redundant control effectors to achieve some desired set of objectives. The problem is deemed linear if control effectiveness is affine with respect to the individual control effectors. The optimal solution is that which exploits the collective maximum capability of the effectors within their individual physical limits. Computational efficiency is measured by the number of floating-point operations required for solution. The method presented returned optimal solutions in more than 90% of the cases examined; non-optimal solutions returned by the method were typically much less than 1% different from optimal, and the errors tended to become smaller than 0.01% as the number of controls was increased. The magnitude of the errors returned by the present method was much smaller than those that resulted from either pseudo-inverse or cascaded generalized inverse solutions. The computational complexity of the method presented varied linearly with increasing numbers of controls; it ran roughly 5.5 to seven times faster than the minimum-norm solution (the pseudoinverse), and at about the same rate as the cascaded generalized inverse solution. The computational requirements of the method presented were much better than those of previously described facet-searching methods, which increase in proportion to the square of the number of controls.
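The pseudoinverse baseline the abstract compares against is simple to state: with an effectiveness matrix B mapping control deflections to the three moment objectives, the minimum-norm allocation is u = B⁺m. The matrix and desired moments below are illustrative, and this sketch deliberately ignores the effector limits that the patented method exploits.

```python
import numpy as np

# Effectiveness matrix B maps 4 control deflections to 3 moment objectives
# (values are illustrative; a real B comes from aerodynamic derivatives).
B = np.array([[1.0,  0.0, 0.5, -0.5],
              [0.0,  1.0, 0.3,  0.3],
              [0.2, -0.2, 1.0,  1.0]])
m_des = np.array([0.4, -0.2, 0.6])      # desired moments

# Minimum-norm (pseudoinverse) allocation: u = B^+ m. Exact when B has full
# row rank, but it does not exploit the full attainable moment set the way
# facet-searching or cascaded-generalized-inverse methods do.
u = np.linalg.pinv(B) @ m_des
```

This is the reference point for the abstract's error comparison: the pseudoinverse hits the commanded moments exactly when they are attainable, yet leaves usable effector authority on the table near the limits.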
Optimal updating magnitude in adaptive flat-distribution sampling
NASA Astrophysics Data System (ADS)
Zhang, Cheng; Drake, Justin A.; Ma, Jianpeng; Pettitt, B. Montgomery
2017-11-01
We present a study on the optimization of the updating magnitude for a class of free energy methods based on flat-distribution sampling, including the Wang-Landau (WL) algorithm and metadynamics. These methods rely on adaptive construction of a bias potential that offsets the potential of mean force by histogram-based updates. The convergence of the bias potential can be improved by decreasing the updating magnitude with an optimal schedule. We show that while the asymptotically optimal schedule for the single-bin updating scheme (commonly used in the WL algorithm) is given by the known inverse-time formula, that for the Gaussian updating scheme (commonly used in metadynamics) is often more complex. We further show that the single-bin updating scheme is optimal for very long simulations, and it can be generalized to a class of bandpass updating schemes that are similarly optimal. These bandpass updating schemes target only a few long-range distribution modes and their optimal schedule is also given by the inverse-time formula. Constructed from orthogonal polynomials, the bandpass updating schemes generalize the WL and Langfeld-Lucini-Rago algorithms as an automatic parameter tuning scheme for umbrella sampling.
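The single-bin (Wang-Landau-style) update with an inverse-time schedule can be demonstrated on a toy system whose density of states is known exactly. The two-dice model, the capped n_bins/t schedule, and the step count below are all assumptions for illustration; they are not the paper's test systems.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy system with a known answer: two dice, "energy" = sum of the faces.
# The exact density of states n(E) for sums 2..12 is 1,2,3,4,5,6,5,4,3,2,1,
# so the converged bias (the log-DOS estimate) can be checked directly.
exact = np.array([1, 2, 3, 4, 5, 6, 5, 4, 3, 2, 1], float)
g = np.zeros(11)                      # running estimate of ln n(E)
state = (1, 1)

for t in range(1, 400_001):
    f = min(1.0, 11.0 / t)            # inverse-time updating magnitude (capped)
    prop = (rng.integers(1, 7), rng.integers(1, 7))
    e_old, e_new = sum(state) - 2, sum(prop) - 2
    # Flat-histogram acceptance: move toward under-weighted energy levels.
    if rng.random() < np.exp(g[e_old] - g[e_new]):
        state = prop
    g[sum(state) - 2] += f            # histogram-based update of the bias

est = g - g.mean()                    # bias is defined up to a constant
ref = np.log(exact) - np.log(exact).mean()
```

With the decaying schedule the bias stops fluctuating at O(√f) and converges toward ln n(E), which is the behavior the optimal-schedule analysis in the abstract is about.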
Robust optimization in lung treatment plans accounting for geometric uncertainty.
Zhang, Xin; Rong, Yi; Morrill, Steven; Fang, Jian; Narayanasamy, Ganesh; Galhardo, Edvaldo; Maraboyina, Sanjay; Croft, Christopher; Xia, Fen; Penagaricano, Jose
2018-05-01
Robust optimization generates scenario-based plans by a minimax optimization method to find the optimal trade-off between target coverage robustness and organ-at-risk (OAR) sparing. In this study, 20 lung cancer patients with tumors located at various anatomical regions within the lungs were selected, and robust-optimization photon treatment plans, including intensity modulated radiotherapy (IMRT) and volumetric modulated arc therapy (VMAT) plans, were generated. Plan robustness was analyzed using perturbed doses with a setup error boundary of ±3 mm in anterior/posterior (AP), ±3 mm in left/right (LR), and ±5 mm in inferior/superior (IS) directions from the isocenter. Perturbed doses for D99, D98, and D95 were computed from six shifted-isocenter plans to evaluate plan robustness. A dosimetric study was performed to compare the internal target volume-based robust optimization plans (ITV-IMRT and ITV-VMAT) and conventional PTV margin-based plans (PTV-IMRT and PTV-VMAT). The dosimetric comparison parameters were: ITV target mean dose (Dmean), R95 (D95/Dprescription), Paddick's conformity index (CI), homogeneity index (HI), monitor units (MU), and OAR doses including lung (Dmean, V20Gy and V15Gy), chest wall, heart, esophagus, and maximum cord doses. A comparison of optimization results showed that the robust optimization plans had better ITV dose coverage, better CI, worse HI, and lower OAR doses than the conventional PTV margin-based plans. The plan robustness evaluation showed that the perturbed doses D99, D98, and D95 all satisfied the criterion that at least 99% of the ITV received 95% of the prescription dose. It was also observed that PTV margin-based plans had higher MU than robust optimization plans. The results also showed that robust optimization can generate plans that offer increased OAR sparing, especially for normal lungs and OARs near or abutting the target.
Weak correlation was found between normal lung dose and target size, and no other correlation was observed in this study. © 2018 University of Arkansas for Medical Sciences. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
A two‐point scheme for optimal breast IMRT treatment planning
2013-01-01
We propose an approach to determining optimal beam weights in breast/chest wall IMRT treatment plans. The goal is to decrease breathing effect and to maximize skin dose if the skin is included in the target or, otherwise, to minimize the skin dose. Two points in the target are utilized to calculate the optimal weights. The optimal plan (i.e., the plan with optimal beam weights) consists of high energy unblocked beams, low energy unblocked beams, and IMRT beams. Six breast and five chest wall cases were retrospectively planned with this scheme in Eclipse, including one breast case where CTV was contoured by the physician. Compared with 3D CRT plans composed of unblocked and field‐in‐field beams, the optimal plans demonstrated comparable or better dose uniformity, homogeneity, and conformity to the target, especially at beam junction when supraclavicular nodes are involved. Compared with nonoptimal plans (i.e., plans with nonoptimized weights), the optimal plans had better dose distributions at shallow depths close to the skin, especially in cases where breathing effect was taken into account. This was verified with experiments using a MapCHECK device attached to a motion simulation table (to mimic motion caused by breathing). PACS number: 87.55 de PMID:24257291
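The two-point idea above amounts to a small linear solve: pick weights for the component beams so that the combined dose hits the prescription at both calculation points. A hedged sketch (the dose-per-MU matrix below is hypothetical; the published scheme also involves IMRT beams and breathing considerations not modeled here):

```python
import numpy as np

def two_point_weights(dose_per_mu, prescription):
    """Solve D @ w = prescription for the beam weights w.
    Rows of `dose_per_mu` are the two target points, columns are the
    component beams (e.g. high-energy open, low-energy open)."""
    return np.linalg.solve(dose_per_mu, prescription)

# Hypothetical dose-per-MU contributions at the two points:
D = np.array([[1.0, 0.5],
              [0.5, 1.0]])
rx = np.array([50.0, 50.0])
w = two_point_weights(D, rx)   # weights that satisfy both points exactly
```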
Current Concepts in Labral Repair and Refixation: Anatomical Approach to Labral Management.
Kollmorgen, Robert; Mather, Richard
Arthroscopic labral repair and refixation have garnered much attention over the past several years. Restoration of suction seal and native labral function has been an evolving focus for achieving excellent results in hip preservation surgery. Authors have reported using several labral management techniques: débridement, labralization, looped suture fixation, base stitch fixation, inversion-eversion, and reconstruction. The optimal technique is yet to be determined. Absolute indications for labral repair are symptomatic intra-articular pain, joint space >2 mm, and failed conservative management. Extreme attention is given to identifying and addressing the cause, whether it be acute or repetitive trauma, instability, or femoroacetabular impingement. In this article, we discuss indications for labral repair; describe Dr. Mather's preoperative planning, labral repair technique, and postoperative care; and review published outcomes and future trends in labral repair.
Novel Scalable 3-D MT Inverse Solver
NASA Astrophysics Data System (ADS)
Kuvshinov, A. V.; Kruglyakov, M.; Geraskin, A.
2016-12-01
We present a new, robust and fast, three-dimensional (3-D) magnetotelluric (MT) inverse solver. As the forward modelling engine, the highly scalable solver extrEMe [1] is used. The (regularized) inversion is based on an iterative gradient-type optimization (quasi-Newton method) and exploits the adjoint-source approach for fast calculation of the gradient of the misfit. The inverse solver is able to deal with highly detailed and contrasting models, allows for working (separately or jointly) with any type of MT (single-site and/or inter-site) responses, and supports massive parallelization. Different parallelization strategies implemented in the code allow for optimal usage of available computational resources for a given problem setup. To parameterize the inverse domain, a mask approach is implemented, meaning that one can merge any subset of forward modelling cells in order to account for the (usually) irregular distribution of observation sites. We report results of 3-D numerical experiments aimed at analysing the robustness, performance and scalability of the code. In particular, our computational experiments, carried out on platforms ranging from modern laptops to high-performance clusters, demonstrate practically linear scalability of the code up to thousands of nodes. 1. Kruglyakov, M., A. Geraskin, A. Kuvshinov, 2016. Novel accurate and scalable 3-D MT forward solver based on a contracting integral equation method, Computers and Geosciences, in press.
Stochastic seismic inversion based on an improved local gradual deformation method
NASA Astrophysics Data System (ADS)
Yang, Xiuwei; Zhu, Peimin
2017-12-01
A new stochastic seismic inversion method based on the local gradual deformation method is proposed, which can incorporate seismic data, well data, geology and their spatial correlations into the inversion process. Geological information, such as sedimentary facies and structures, can provide significant a priori information to constrain an inversion and arrive at reasonable solutions. The local a priori conditional cumulative distributions at each node of the model to be inverted are first established by indicator cokriging, which integrates well data as hard data and geological information as soft data. Probability field simulation is used to simulate different realizations consistent with the spatial correlations and local conditional cumulative distributions. The corresponding probability field is generated by the fast Fourier transform moving average method. Then, optimization is performed to match the seismic data via an improved local gradual deformation method. Two improved strategies are proposed to make the method suitable for seismic inversion. The first strategy is to select and update local areas where the fit between synthetic and real seismic data is poor. The second is to divide each seismic trace into several parts and obtain the optimal parameters for each part individually. The applications to a synthetic example and a real case study demonstrate that our approach can effectively find fine-scale acoustic impedance models and provide uncertainty estimations.
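The core of the gradual deformation method referenced above is the classical combination of two independent Gaussian realizations, z(t) = z1·cos(πt) + z2·sin(πt), which stays Gaussian for any t, followed by a one-dimensional optimization over t against the data misfit. A minimal sketch of that building block (not the authors' local/improved variant; the misfit function is a placeholder):

```python
import numpy as np

def gradual_deformation(z1, z2, t):
    """Combine two independent standard-Gaussian realizations; the result
    is again standard Gaussian for any t, since cos^2 + sin^2 = 1."""
    return z1 * np.cos(np.pi * t) + z2 * np.sin(np.pi * t)

def best_t(z1, z2, misfit, n_grid=101):
    """1-D search over the deformation parameter t in [-1, 1] for the
    realization that minimizes the (user-supplied) data misfit."""
    ts = np.linspace(-1.0, 1.0, n_grid)
    vals = [misfit(gradual_deformation(z1, z2, t)) for t in ts]
    return ts[int(np.argmin(vals))]
```

In a full chain, the optimized realization becomes z1 for the next iteration, with a fresh z2 drawn each time.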
DOE Office of Scientific and Technical Information (OSTI.GOV)
Breedveld, Sebastiaan; Storchi, Pascal R. M.; Voet, Peter W. J.
2012-02-15
Purpose: To introduce iCycle, a novel algorithm for integrated, multicriterial optimization of beam angles and intensity modulated radiotherapy (IMRT) profiles. Methods: A multicriterial plan optimization with iCycle is based on a prescription called a wish-list, containing hard constraints and objectives with ascribed priorities. Priorities are ordinal parameters used for relative importance ranking of the objectives. The higher an objective's priority, the higher the probability that the corresponding objective will be met. Beam directions are selected from an input set of candidate directions. Input sets can be restricted, e.g., to allow only generation of coplanar plans, or to avoid collisions between patient/couch and the gantry in a noncoplanar setup. Obtaining clinically feasible calculation times was an important design criterion in the development of iCycle. This was realized by sequentially adding beams to the treatment plan in an iterative procedure. Each iteration loop starts with selection of the optimal direction to be added. Then, a Pareto-optimal IMRT plan is generated for the (fixed) beam setup that includes all so-far selected directions, using a previously published algorithm for multicriterial optimization of fluence profiles for a fixed beam arrangement, Breedveld et al. [Phys. Med. Biol. 54, 7199-7209 (2009)]. To select the next direction, each not-yet-selected candidate direction is temporarily added to the plan and an optimization problem, derived from the Lagrangian obtained from the just-performed optimization for establishing the Pareto-optimal plan, is solved. For each patient, a single one-beam, two-beam, three-beam, etc. Pareto-optimal plan is generated until adding beams no longer results in significant plan quality improvement. Plan generation with iCycle is fully automated.
Results: Performance and characteristics of iCycle are demonstrated by generating plans for a maxillary sinus case, a cervical cancer patient, and a liver patient treated with SBRT. Plans generated with beam angle optimization met the clinical goals better than equiangular or manually selected configurations. For the maxillary sinus and liver cases, significant improvements for noncoplanar setups were seen. The cervix case showed that beam angle optimization with iCycle may improve plan quality even in coplanar IMRT setups. Computation times were around 1-2 h for coplanar plans and 4-7 h for noncoplanar plans, depending on the number of beams and the complexity of the site. Conclusions: Integrated beam angle and profile optimization with iCycle may result in significant improvements in treatment plan quality. Due to automation, the plan generation workload is minimal. Clinical application has started.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, X; Sun, T; Liu, T
2014-06-01
Purpose: To evaluate the dosimetric characteristics of intensity-modulated radiotherapy (IMRT) treatment plans with beam angle optimization. Methods: Ten post-operative patients with cervical cancer were included in this analysis. Two IMRT plans using seven beams were designed for each patient. Standard coplanar equispaced beam angles were used in the first plan (plan 1), whereas the beam angles were optimized by the beam angle optimization algorithm in the Varian Eclipse treatment planning system, for the same number of beams, in the second plan (plan 2). The two plans were designed for each patient with the same dose-volume constraints and prescription dose. All plans were normalized to the mean dose to the PTV. The dose distribution in the target, the dose to the organs at risk, and total MU were compared. Results: For conformity and homogeneity in the PTV, no statistically significant differences were observed between the two plans. The mean bladder dose in plan 2 was significantly lower than in plan 1 (p < 0.05). No statistically significant differences were observed between the two plans for the mean doses to the rectum and the left and right femoral heads. Compared with plan 1, the average monitor units were reduced by 16% in plan 2. Conclusion: The IMRT plan based on beam angle optimization for cervical cancer could reduce the dose delivered to the bladder and also reduce MU. Therefore, the IMRT plan with beam angle optimization offered some dosimetric advantages for cervical cancer.
Force sensing using 3D displacement measurements in linear elastic bodies
NASA Astrophysics Data System (ADS)
Feng, Xinzeng; Hui, Chung-Yuen
2016-07-01
In cell traction microscopy, the mechanical forces exerted by a cell on its environment are usually determined from experimentally measured displacements by solving an inverse problem in elasticity. In this paper, an innovative numerical method is proposed that finds the "optimal" traction for the inverse problem. When sufficient regularization is applied, we demonstrate that the proposed method significantly improves on the widely used approach based on Green's functions. Motivated by real cell experiments, the equilibrium condition of a slowly migrating cell is imposed as a set of equality constraints on the unknown traction. Our validation benchmarks demonstrate that the numerical solution to the constrained inverse problem recovers the actual traction well when the optimal regularization parameter is used. The proposed method can thus be applied to study general force sensing problems, which utilize displacement measurements to sense inaccessible forces in linear elastic bodies with a priori constraints.
NASA Astrophysics Data System (ADS)
Dai, Meng-Xue; Chen, Jing-Bo; Cao, Jian
2017-07-01
Full-waveform inversion (FWI) is an ill-posed optimization problem that is sensitive to noise and to the initial model. To alleviate the ill-posedness of the problem, regularization techniques are usually adopted. The ℓ1-norm penalty is a robust regularization method that preserves contrasts and edges. The Orthant-Wise Limited-memory Quasi-Newton (OWL-QN) method extends the widely used limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method to ℓ1-regularized optimization problems and inherits the efficiency of L-BFGS. To take advantage of the ℓ1-regularized method and of the prior model information obtained from sonic logs and geological information, we implement the OWL-QN algorithm in ℓ1-regularized FWI with prior model information. Numerical experiments show that this method not only improves the inversion results but also has strong anti-noise ability.
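The ℓ1-regularized objective above, min 0.5·||F(m) − d||² + λ·||m||₁, is what OWL-QN solves with a quasi-Newton iteration restricted to a single orthant. As a simpler stand-in that shows the same sparsity-promoting mechanism (soft-thresholding), here is a proximal-gradient (ISTA) sketch for the linear case; it is not OWL-QN itself, and the operator A and data b are placeholders:

```python
import numpy as np

def soft_threshold(x, lam):
    # Proximal operator of lam * ||x||_1: shrinks toward zero, zeroing small entries
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def ista(A, b, lam, n_iter=500):
    """Minimize 0.5*||A x - b||^2 + lam*||x||_1 by proximal gradient (ISTA),
    a simpler stand-in for the orthant-wise quasi-Newton (OWL-QN) method."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)           # gradient of the data-misfit term
        x = soft_threshold(x - grad / L, lam / L)
    return x
```

The edge-preserving behavior described in the abstract comes from this thresholding: small, noise-driven updates are suppressed while large contrasts survive.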
Inverse problem of the vibrational band gap of periodically supported beam
NASA Astrophysics Data System (ADS)
Shi, Xiaona; Shu, Haisheng; Dong, Fuzhen; Zhao, Lei
2017-04-01
Research on periodic structures has a long history, with its main content confined to the forward problem. In this paper, the inverse problem is considered and an overall framework is proposed that includes two main stages, i.e., the band gap criterion and its optimization. As a preliminary investigation, the inverse problem of the flexural vibrational band gap of a periodically supported beam is analyzed. Based on existing knowledge of its forward problem, the band gap criterion is given in implicit form. Then, two cases with three independent parameters, namely the double-supported case and the triple-supported case, are studied in detail and explicit expressions for the feasible domain are constructed by numerical fitting. Finally, parameter optimization of the double-supported case with three variables is conducted using a genetic algorithm, aiming for the best mean attenuation within a specified frequency band.
NASA Astrophysics Data System (ADS)
Mushtak, V. C.; Williams, E.
2010-12-01
The spatial-temporal behavior of worldwide lightning activity can be effectively used as an indicator of various geophysical processes, with global climate change being of special interest among them. Since it has been reliably established that lightning activity is a major source of the natural electromagnetic background in the Schumann resonance (SR) frequency range (5 to 40 Hz), SR measurements provide a continuous flow of information about this globally distributed source, thus forming an informative basis for monitoring its behavior via an inversion of observations into the source's properties. For such an inversion procedure to be effective, a series of prerequisites must be met when planning and realizing it: (a) a proper choice of observable parameters to be used in the inversion; (b) a proper choice of a forward propagation model that is accurate enough to take into consideration the major propagation effects occurring between a source and observer; (c) a proper choice of a method for inverting the sensitivity matrix. While prerequisite (a) is quite naturally fulfilled by considering the SR resonance characteristics (modal frequencies, intensities, and quality factors), compliance with prerequisites (b) and (c) has benefitted greatly from earlier seminal work on geophysical inversion by T. R. Madden. Since it has been found that the electrodynamic non-uniformities of the Earth-ionosphere waveguide, primarily the day/night asymmetry, play an essential role in low-frequency propagation, use has been made of the theory of the two-dimensional telegraph equation (TDTE; Kirillov, 2002), developed on the basis of the innovative suggestion by Madden and Thompson (1965) to treat the waveguide, both physically and mathematically, by analogy with a two-dimensional transmission line.
Because of the iterative nature of the inversion procedure and the complicated, non-analytical character of the propagation theory, a special fast-running TDTE forward algorithm has been developed for the repeated calculations of the sensitivity matrix. The theory for the inverse boundary value problem from Madden (1972) makes it possible not only to correctly invert the sensitivity matrix, especially when the latter is ill-defined, but also to determine the optimal observational design a priori. The workability of the developed approaches and techniques is illustrated by estimating and processing observations from a network of SR stations located in Europe (Sopron, Hungary; Belsk, Poland), Asia (Shilong, India; Moshiri, Japan), North America (Rhode Island, USA), and Antarctica (Syowa). The spatial dynamics of the major lightning “chimneys” determined via the inversion procedure were found to be in good agreement with general geophysical knowledge even when only the modal frequencies were used. The incorporation of modal intensities greatly improves the agreement, while the Q-factors were found to be of lesser informative value. The preliminary results form a promising basis for achieving the ultimate objective of this study. The authors are deeply grateful to all the participants of the project who have generously, and on a gratis basis, invested their time and effort into preparing and providing the SR data.
Giżyńska, Marta K.; Kukołowicz, Paweł F.; Kordowski, Paweł
2014-01-01
Aim The aim of this work is to present a method of beam weight and wedge angle optimization for patients with prostate cancer. Background 3D-CRT is usually realized with forward planning based on a trial-and-error method. Several authors have published methods of beam weight optimization applicable to 3D-CRT; still, none of these methods is in common use. Materials and methods Optimization is based on the assumption that the best plan is achieved if the dose gradient at the ICRU point is equal to zero. Our optimization algorithm requires the beam quality index, depth of maximum dose, profiles of wedged fields, and the maximum dose to the femoral heads. The method was tested on 10 patients with prostate cancer treated with the 3-field technique. Optimized plans were compared with plans prepared by 12 experienced planners. The dose standard deviation in the target volume and the minimum and maximum doses were analyzed. Results The quality of plans obtained with the proposed optimization algorithm was comparable to that of plans prepared by experienced planners. The mean difference in target dose standard deviation was 0.1% in favor of the plans prepared by planners, for optimization of beam weights and wedge angles. Introducing a correction factor for the patient body outline into the dose gradient at the ICRU point improved dose distribution homogeneity: on average, a 0.1% lower standard deviation was achieved with the optimization algorithm. No significant difference in the mean dose–volume histogram for the rectum was observed. Conclusions Optimization greatly shortens planning time. The average planning time was 5 min for forward planning and less than a minute for computer optimization. PMID:25337411
Miura, Hideharu; Ozawa, Shuichi; Nagata, Yasushi
2017-09-01
This study investigated position dependence in planning target volume (PTV)-based and robust optimization plans using full-arc and partial-arc volumetric modulated arc therapy (VMAT). The gantry angles at the periphery, intermediate, and center CTV positions were 181°-180° (full-arc VMAT) and 181°-360° (partial-arc VMAT). A PTV-based optimization plan was defined by a 5 mm margin expansion of the CTV to a PTV volume, on which the dose constraints were applied. The robust optimization plan consisted of a directly optimized dose to the CTV under a maximum setup uncertainty of 5 mm. The prescription dose was normalized to the CTV D99% (the minimum relative dose that covers 99% of the volume of the CTV) as an original plan. The isocenter was rigidly shifted at 1 mm intervals in the anterior-posterior (A-P), superior-inferior (S-I), and right-left (R-L) directions from the original position up to the maximum setup uncertainty of 5 mm in the original plan, yielding recalculated dose distributions. It was found that for the intermediate and center positions, the uncertainties in the D99% doses to the CTV for all directions did not significantly differ when comparing the PTV-based and robust optimization plans (P > 0.05). For the periphery position, uncertainties in the D99% doses to the CTV in the R-L direction for the robust optimization plan were found to be lower than those in the PTV-based optimization plan (P < 0.05). Our study demonstrated that a robust optimization plan's efficacy using partial-arc VMAT depends on the periphery CTV position. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
Inversion of geothermal heat flux in a thermomechanically coupled nonlinear Stokes ice sheet model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Hongyu; Petra, Noemi; Stadler, Georg
We address the inverse problem of inferring the basal geothermal heat flux from surface velocity observations using a steady-state thermomechanically coupled nonlinear Stokes ice flow model. This is a challenging inverse problem since the map from basal heat flux to surface velocity observables is indirect: the heat flux is a boundary condition for the thermal advection–diffusion equation, which couples to the nonlinear Stokes ice flow equations; together they determine the surface ice flow velocity. This multiphysics inverse problem is formulated as a nonlinear least-squares optimization problem with a cost functional that includes the data misfit between surface velocity observations and model predictions. A Tikhonov regularization term is added to render the problem well posed. We derive adjoint-based gradient and Hessian expressions for the resulting partial differential equation (PDE)-constrained optimization problem and propose an inexact Newton method for its solution. As a consequence of the Petrov–Galerkin discretization of the energy equation, we show that discretization and differentiation do not commute; that is, the order in which we discretize the cost functional and differentiate it affects the correctness of the gradient. Using two- and three-dimensional model problems, we study the prospects for and limitations of the inference of the geothermal heat flux field from surface velocity observations. The results show that the reconstruction improves as the noise level in the observations decreases and that short-wavelength variations in the geothermal heat flux are difficult to recover. We analyze the ill-posedness of the inverse problem as a function of the number of observations by examining the spectrum of the Hessian of the cost functional.
Motivated by the popularity of operator-split or staggered solvers for forward multiphysics problems – i.e., those that drop two-way coupling terms to yield a one-way coupled forward Jacobian – we study the effect on the inversion of a one-way coupling of the adjoint energy and Stokes equations. Here, we show that taking such a one-way coupled approach for the adjoint equations can lead to an incorrect gradient and premature termination of optimization iterations. This is due to loss of a descent direction stemming from inconsistency of the gradient with the contours of the cost functional. Nevertheless, one may still obtain a reasonable approximate inverse solution particularly if important features of the reconstructed solution emerge early in optimization iterations, before the premature termination.
Inversion of geothermal heat flux in a thermomechanically coupled nonlinear Stokes ice sheet model
Zhu, Hongyu; Petra, Noemi; Stadler, Georg; ...
2016-07-13
We address the inverse problem of inferring the basal geothermal heat flux from surface velocity observations using a steady-state thermomechanically coupled nonlinear Stokes ice flow model. This is a challenging inverse problem since the map from basal heat flux to surface velocity observables is indirect: the heat flux is a boundary condition for the thermal advection–diffusion equation, which couples to the nonlinear Stokes ice flow equations; together they determine the surface ice flow velocity. This multiphysics inverse problem is formulated as a nonlinear least-squares optimization problem with a cost functional that includes the data misfit between surface velocity observations and model predictions. A Tikhonov regularization term is added to render the problem well posed. We derive adjoint-based gradient and Hessian expressions for the resulting partial differential equation (PDE)-constrained optimization problem and propose an inexact Newton method for its solution. As a consequence of the Petrov–Galerkin discretization of the energy equation, we show that discretization and differentiation do not commute; that is, the order in which we discretize the cost functional and differentiate it affects the correctness of the gradient. Using two- and three-dimensional model problems, we study the prospects for and limitations of the inference of the geothermal heat flux field from surface velocity observations. The results show that the reconstruction improves as the noise level in the observations decreases and that short-wavelength variations in the geothermal heat flux are difficult to recover. We analyze the ill-posedness of the inverse problem as a function of the number of observations by examining the spectrum of the Hessian of the cost functional.
Motivated by the popularity of operator-split or staggered solvers for forward multiphysics problems – i.e., those that drop two-way coupling terms to yield a one-way coupled forward Jacobian – we study the effect on the inversion of a one-way coupling of the adjoint energy and Stokes equations. Here, we show that taking such a one-way coupled approach for the adjoint equations can lead to an incorrect gradient and premature termination of optimization iterations. This is due to loss of a descent direction stemming from inconsistency of the gradient with the contours of the cost functional. Nevertheless, one may still obtain a reasonable approximate inverse solution particularly if important features of the reconstructed solution emerge early in optimization iterations, before the premature termination.
Inversion of geothermal heat flux in a thermomechanically coupled nonlinear Stokes ice sheet model
NASA Astrophysics Data System (ADS)
Zhu, Hongyu; Petra, Noemi; Stadler, Georg; Isaac, Tobin; Hughes, Thomas J. R.; Ghattas, Omar
2016-07-01
We address the inverse problem of inferring the basal geothermal heat flux from surface velocity observations using a steady-state thermomechanically coupled nonlinear Stokes ice flow model. This is a challenging inverse problem since the map from basal heat flux to surface velocity observables is indirect: the heat flux is a boundary condition for the thermal advection-diffusion equation, which couples to the nonlinear Stokes ice flow equations; together they determine the surface ice flow velocity. This multiphysics inverse problem is formulated as a nonlinear least-squares optimization problem with a cost functional that includes the data misfit between surface velocity observations and model predictions. A Tikhonov regularization term is added to render the problem well posed. We derive adjoint-based gradient and Hessian expressions for the resulting partial differential equation (PDE)-constrained optimization problem and propose an inexact Newton method for its solution. As a consequence of the Petrov-Galerkin discretization of the energy equation, we show that discretization and differentiation do not commute; that is, the order in which we discretize the cost functional and differentiate it affects the correctness of the gradient. Using two- and three-dimensional model problems, we study the prospects for and limitations of the inference of the geothermal heat flux field from surface velocity observations. The results show that the reconstruction improves as the noise level in the observations decreases and that short-wavelength variations in the geothermal heat flux are difficult to recover. We analyze the ill-posedness of the inverse problem as a function of the number of observations by examining the spectrum of the Hessian of the cost functional. 
Motivated by the popularity of operator-split or staggered solvers for forward multiphysics problems - i.e., those that drop two-way coupling terms to yield a one-way coupled forward Jacobian - we study the effect on the inversion of a one-way coupling of the adjoint energy and Stokes equations. We show that taking such a one-way coupled approach for the adjoint equations can lead to an incorrect gradient and premature termination of optimization iterations. This is due to loss of a descent direction stemming from inconsistency of the gradient with the contours of the cost functional. Nevertheless, one may still obtain a reasonable approximate inverse solution particularly if important features of the reconstructed solution emerge early in optimization iterations, before the premature termination.
Hybrid inversions of CO2 fluxes at regional scale applied to network design
NASA Astrophysics Data System (ADS)
Kountouris, Panagiotis; Gerbig, Christoph; Koch, Frank-Thomas
2013-04-01
Long-term observations from atmospheric greenhouse gas measuring stations, located at representative regions over the continent, improve our understanding of greenhouse gas sources and sinks. These mixing ratio measurements can be linked to surface fluxes by atmospheric transport inversions. Within the upcoming years, new stations are to be deployed, which requires decision-making tools regarding the location and density of the network. We are developing a method to assess potential greenhouse gas observing networks in terms of their ability to recover specific target quantities. As target quantities we use CO2 fluxes aggregated to specific spatial and temporal scales. We introduce a high-resolution inverse modeling framework, which attempts to combine advantages of pixel-based inversions with those of a carbon cycle data assimilation system (CCDAS). The hybrid inversion system consists of the Lagrangian transport model STILT, the diagnostic biosphere model VPRM, and a Bayesian inversion scheme. We aim to retrieve the spatiotemporal distribution of net ecosystem exchange (NEE) at a high spatial resolution (10 km x 10 km) by inverting for spatially and temporally varying scaling factors for gross ecosystem exchange (GEE) and respiration (R) rather than solving for the fluxes themselves. Thus the state space includes parameters controlling photosynthesis and respiration, but unlike in a CCDAS it allows for spatial and temporal variations, which can be expressed as NEE(x,y,t) = λ_G(x,y,t)·GEE(x,y,t) + λ_R(x,y,t)·R(x,y,t). We apply spatially and temporally correlated uncertainties by using error covariance matrices with non-zero off-diagonal elements. Synthetic experiments will test our system and select the optimal a priori error covariance by using different spatial and temporal correlation lengths in the error statistics of the a priori covariance and comparing the optimized fluxes against the 'known truth'.
As 'known truth' we use independent fluxes generated from a different biosphere model (BIOME-BGC). Initially we perform single-station inversions for Ochsenkopf tall tower located in Germany. Further expansion of the inversion framework to multiple stations and its application to network design will address the questions of how well a set of network stations can constrain a given target quantity, and whether there are objective criteria to select an optimal configuration for new stations that maximizes the uncertainty reduction.
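The two concrete ingredients in the abstract above, the scaled-flux state equation and a prior covariance with non-zero off-diagonal terms, can be sketched as follows. This is an illustrative assumption-laden sketch: an exponential correlation model is one common choice for correlated priors, not necessarily the authors' exact form, and all arrays are hypothetical:

```python
import numpy as np

def nee(lam_g, lam_r, gee, resp):
    """State equation of the hybrid inversion:
    NEE(x,y,t) = lambda_G(x,y,t) * GEE(x,y,t) + lambda_R(x,y,t) * R(x,y,t).
    The scaling factors lam_g, lam_r are the quantities being inverted for."""
    return lam_g * gee + lam_r * resp

def exp_covariance(coords, sigma, corr_len):
    """Prior error covariance with exponentially decaying off-diagonal
    elements, C_ij = sigma^2 * exp(-|x_i - x_j| / corr_len), giving the
    spatially correlated uncertainties mentioned in the abstract."""
    d = np.abs(coords[:, None] - coords[None, :])
    return sigma**2 * np.exp(-d / corr_len)
```

With all scaling factors equal to one, the state equation reduces to the diagnostic model's own NEE, which serves as the prior flux estimate.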
Lee, G; Dinniwell, R; Liu, F F; Fyles, A; Han, K; Conrad, T; Levin, W; Marshall, A; Purdie, T G; Koch, C A
2016-12-01
Breast radiotherapy treatment is commonly managed by a multidisciplinary team to ensure optimal delivery of care. We sought a new model of care whereby a clinical specialist radiation therapist (CSRT) delineates the cavity target for whole breast radiotherapy treatment planning and the radiation oncologist validates the contour during final plan review. This study evaluated the radiation oncologist's acceptance of these contours and identified characteristics of cavities suitable for CSRT-directed contouring. Following specialised breast oncology education and training by the radiation oncologist, the CSRT prospectively delineated cavities in 30 tangential breast radiotherapy cases and consulted the radiation oncologist in 'complex' cases but directed 'non-complex' cases for treatment planning. Changes to CSRT contours were evaluated using the conformity index. Breast density, time since surgery and cavity location, size and visualisation score [CVS: range 1 (no visible cavity) to 5 (homogeneous cavity)] were captured. Of the 30 CSRT-delineated cavity contours, the CSRT directed 20 (66.7%) cases for planning without radiation oncology review; 19 were accepted (without changes) by the radiation oncologist upon final plan review and one was changed by the radiation oncologist (conformity index = 0.93) for boost treatment, which did not affect the tangential treatment plan. Ten (33.3%) cases, all CVS ≤ 3, were reviewed with the radiation oncologist before planning (conformity index = 0.88 ± 0.12). CVS was inversely correlated with breast density and cavity size (P < 0.01). The CSRT delineated cavities appropriate for clinical radiotherapy treatment planning in women with well-visualised cavities, whereas 'complex' cases with dense breast parenchyma, CVS ≤ 3, and/or cases needing boost radiotherapy treatment required review with the radiation oncologist before planning. Copyright © 2016 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
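The conformity index used above to compare the CSRT and oncologist contours can be computed once both delineations are rasterized onto a common voxel grid. The abstract does not state which definition was used; a common choice, assumed here, is intersection over union, where 1.0 means identical contours:

```python
import numpy as np

def conformity_index(mask_a, mask_b):
    """Overlap between two delineations on the same voxel grid,
    defined (by assumption) as intersection volume / union volume."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union
```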
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ding, C; Hrycushko, B; Jiang, S
2014-06-01
Purpose: To compare the radiobiological effect on large tumors and surrounding normal tissues of single-fraction SRS, multi-fractionated SRT, and multi-staged SRS treatments. Methods: An anthropomorphic head phantom with a centrally located large-volume target (18.2 cm^3) was scanned using a 16-slice large-bore CT simulator. Scans were imported to the Multiplan treatment planning system, where a total prescription dose of 20 Gy was used for single, three-staged and three-fractionated treatments. Cyber Knife treatment plans were inversely optimized for the target volume to achieve at least 95% coverage by the prescription dose. For the multi-staged plan, the target was segmented into three subtargets of similar volume and shape. Staged plans for the individual subtargets were generated with a planning technique in which the beam MUs of the original plan on the total target volume are reweighted according to the projected beam lengths within each subtarget. Dose matrices for each plan were exported in DICOM format and used to calculate equivalent dose distributions in 2 Gy fractions using an alpha/beta ratio of 10 for the target and 3 for normal tissue. Results: The single-fraction SRS, multi-staged and multi-fractionated SRT plans had an average 2 Gy dose equivalent to the target of 62.89 Gy, 37.91 Gy and 33.68 Gy, respectively. The normal tissue within the 12 Gy physical dose region had an average 2 Gy dose equivalent of 29.55 Gy, 16.08 Gy and 13.93 Gy, respectively. Conclusion: The single-fraction SRS plan had the largest predicted biological effect for the target and the surrounding normal tissue. The multi-staged treatment provided a more potent biological effect on the target than the multi-fractionated SRT treatment, with less biological effect on normal tissue than the single-fraction SRS treatment.
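The 2 Gy dose-equivalent conversion used above is the standard linear-quadratic EQD2 formula; a minimal sketch follows (it assumes a uniform dose, so the printed values illustrate the formula rather than reproduce the voxel-wise plan numbers in the abstract):

```python
def eqd2(total_dose, n_fractions, alpha_beta):
    """Equivalent dose in 2 Gy fractions (linear-quadratic model):
    EQD2 = D * (d + alpha/beta) / (2 + alpha/beta), d = dose per fraction."""
    d = total_dose / n_fractions
    return total_dose * (d + alpha_beta) / (2.0 + alpha_beta)

# A 20 Gy prescription delivered in 1 or 3 fractions
print(eqd2(20, 1, 10))  # single fraction, target (alpha/beta = 10) -> 50.0
print(eqd2(20, 3, 10))  # three fractions, target -> ~27.8
print(eqd2(20, 3, 3))   # three fractions, normal tissue (alpha/beta = 3) -> ~38.7
```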
Optimism and Planning for Future Care Needs among Older Adults
Sörensen, Silvia; Hirsch, Jameson K.; Lyness, Jeffrey M.
2015-01-01
Aging is associated with an increase in need for assistance. Preparation for future care (PFC) is related to improved coping ability as well as better mental and physical health outcomes among older adults. We examined the association of optimism with components of PFC among older adults. We also explored race differences in the relationship between optimism and PFC. In Study 1, multiple regression showed that optimism was positively related to concrete planning. In Study 2, optimism was related to gathering information. An exploratory analysis combining the samples yielded a race interaction: for Whites, higher optimism, but for Blacks, lower optimism, was associated with more planning. High optimism may be a barrier to future planning in certain social and cultural contexts. PMID:26045699
Quintessence Reissner Nordström Anti de Sitter Black Holes and Joule Thomson Effect
NASA Astrophysics Data System (ADS)
Ghaffarnejad, H.; Yaraie, E.; Farsam, M.
2018-06-01
In this work we investigate corrections from the quintessence regime of dark energy to the Joule-Thomson (JT) effect of the Reissner Nordström anti de Sitter (RNAdS) black hole. The quintessence dark energy has the equation of state p_q = ωρ_q with −1 < ω < −1/3. Our calculations are restricted to the ansatz ω = −1 (the cosmological-constant regime) and ω = −2/3 (quintessence dark energy). To study the JT expansion of the AdS gas at constant black hole mass, we calculate the inversion temperature T_i of the quintessence RNAdS black hole, at which its cooling phase changes to a heating phase at a particular inversion pressure P_i. The position of the inversion point {T_i, P_i} is determined by crossing the inversion curves with the corresponding Gibbons-Hawking temperature in the T-P plane. We determine the position of the inversion point for different numerical values of the mass M and the charge Q of the quintessence RNAdS black hole. The cooling-heating phase transition (JT effect) happens for M > Q, in which case the causal singularity is still covered by the horizon. Our calculations show that the position of the inversion point {T_i, P_i} in the T-P plane is sensitive to the quintessence dark energy only for large numerical values of the RNAdS black hole charge Q. In other words, the quintessence dark energy does not affect the position of the inversion point when the RNAdS black hole carries a small charge.
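For reference, the inversion temperature invoked above follows from the vanishing of the Joule-Thomson coefficient; this is the standard thermodynamic relation (not specific to this paper), with the Hawking temperature, thermodynamic volume and Λ-pressure playing the roles of T, V and P in extended black hole thermodynamics:

```latex
\mu_{\mathrm{JT}} = \left(\frac{\partial T}{\partial P}\right)_{H}
= \frac{1}{C_P}\left[T\left(\frac{\partial V}{\partial T}\right)_{P} - V\right],
\qquad
\mu_{\mathrm{JT}} = 0 \;\Longrightarrow\;
T_i = V\left(\frac{\partial T}{\partial V}\right)_{P}.
```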
Noncoplanar VMAT for nasopharyngeal tumors: Plan quality versus treatment time
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wild, Esther, E-mail: e.wild@dkfz.de; Bangert, Mark; Nill, Simeon
Purpose: The authors investigated the potential of optimized noncoplanar irradiation trajectories for volumetric modulated arc therapy (VMAT) treatments of nasopharyngeal patients and studied the trade-off between treatment plan quality and delivery time in radiation therapy. Methods: For three nasopharyngeal patients, the authors generated treatment plans for nine different delivery scenarios using dedicated optimization methods. They compared these scenarios according to dose characteristics, number of beam directions, and estimated delivery times. In particular, the authors generated the following treatment plans: (1) a 4π plan, which is a non-sequenced, fluence-optimized plan that selects beams from approximately 1400 noncoplanar directions and marks a theoretical upper limit of treatment plan quality, (2) a coplanar 2π plan with 72 coplanar beam directions as the counterpart to the noncoplanar 4π plan, (3) a coplanar VMAT plan, (4) a coplanar step-and-shoot (SnS) plan, (5) a beam-angle-optimized (BAO) coplanar SnS IMRT plan, (6) a noncoplanar BAO SnS plan, (7) a VMAT plan with rotated treatment couch, (8) a noncoplanar VMAT plan with an optimized great circle around the patient, and (9) a noncoplanar BAO VMAT plan with an arbitrary trajectory around the patient. Results: VMAT using optimized noncoplanar irradiation trajectories reduced the mean and maximum doses in organs at risk compared to coplanar VMAT plans by 19% on average while the target coverage remained constant. A coplanar BAO SnS plan was superior to coplanar SnS or VMAT; however, noncoplanar plans such as a noncoplanar BAO SnS plan or noncoplanar VMAT yielded a better plan quality than the best coplanar 2π plan. The treatment plan quality of VMAT plans depended on the length of the trajectory.
The delivery times of noncoplanar VMAT plans were estimated to be 6.5 min on average: 1.6 min longer than a coplanar plan but on average 2.8 min faster than a noncoplanar SnS plan of comparable treatment plan quality. Conclusions: The authors’ study reconfirms the dosimetric benefits of noncoplanar irradiation of nasopharyngeal tumors. Both SnS using optimized noncoplanar beam ensembles and VMAT using an optimized, arbitrary, noncoplanar trajectory enabled dose reductions in organs at risk compared to coplanar SnS and VMAT. Using great circles or simple couch rotations to implement noncoplanar VMAT, however, was not sufficient to yield meaningful improvements in treatment plan quality. The authors estimate that noncoplanar VMAT using arbitrary optimized irradiation trajectories comes at an increased delivery time compared to coplanar VMAT yet at a decreased delivery time compared to noncoplanar SnS IMRT.
NASA Astrophysics Data System (ADS)
Turbelin, Grégory; Singh, Sarvesh Kumar; Issartel, Jean-Pierre
2014-12-01
In the event of an accidental or intentional contaminant release in the atmosphere, it is imperative, for managing emergency response, to diagnose the release parameters of the source from measured data. Reconstruction of the source information exploiting measured data is called an inverse problem. To solve such a problem, several techniques are currently being developed. The first part of this paper provides a detailed description of one of them, known as the renormalization method. This technique, proposed by Issartel (2005), has been derived using an approach different from that of standard inversion methods and gives a linear solution to the continuous Source Term Estimation (STE) problem. In the second part of this paper, the discrete counterpart of this method is presented. By using matrix notation, common in data assimilation and suitable for numerical computing, it is shown that the discrete renormalized solution belongs to a family of well-known inverse solutions (minimum weighted norm solutions), which can be computed by using the concept of generalized inverse operator. It is shown that, when the weight matrix satisfies the renormalization condition, this operator satisfies the criteria used in geophysics to define good inverses. Notably, by means of the Model Resolution Matrix (MRM) formalism, we demonstrate that the renormalized solution fulfils optimal properties for the localization of single point sources. Throughout the article, the main concepts are illustrated with data from a wind tunnel experiment conducted at the Environmental Flow Research Centre at the University of Surrey, UK.
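The minimum weighted-norm family to which the renormalized solution belongs can be sketched directly: minimizing x^T W x subject to A x = y yields the weighted generalized inverse x = W^{-1} A^T (A W^{-1} A^T)^{-1} y. The dimensions and weights below are illustrative, not taken from the wind tunnel data:

```python
import numpy as np

def min_weighted_norm_solution(A, y, W):
    """Minimum weighted-norm solution of the underdetermined system A x = y:
    minimize x^T W x subject to A x = y, via
    x = W^-1 A^T (A W^-1 A^T)^-1 y."""
    Winv_At = np.linalg.solve(W, A.T)
    return Winv_At @ np.linalg.solve(A @ Winv_At, y)

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 8))        # 3 measurements, 8 source cells
y = rng.standard_normal(3)
W = np.diag(rng.uniform(0.5, 2.0, 8))  # renormalizing weight (diagonal here)
x = min_weighted_norm_solution(A, y, W)
print(np.allclose(A @ x, y))  # the solution reproduces the data -> True
```

With W equal to the identity this reduces to the ordinary Moore-Penrose minimum-norm solution.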
Workflows for Full Waveform Inversions
NASA Astrophysics Data System (ADS)
Boehm, Christian; Krischer, Lion; Afanasiev, Michael; van Driel, Martin; May, Dave A.; Rietmann, Max; Fichtner, Andreas
2017-04-01
Despite many theoretical advances and the increasing availability of high-performance computing clusters, full seismic waveform inversions still face considerable challenges regarding data and workflow management. While the community has access to solvers which can harness modern heterogeneous computing architectures, the computational bottleneck has fallen to these often manpower-bounded issues that need to be overcome to facilitate further progress. Modern inversions involve huge amounts of data and require a tight integration between numerical PDE solvers, data acquisition and processing systems, nonlinear optimization libraries, and job orchestration frameworks. To this end we created a set of libraries and applications revolving around Salvus (http://salvus.io), a novel software package designed to solve large-scale full waveform inverse problems. This presentation focuses on solving passive source seismic full waveform inversions from local to global scales with Salvus. We discuss (i) design choices for the aforementioned components required for full waveform modeling and inversion, (ii) their implementation in the Salvus framework, and (iii) how it is all tied together by a usable workflow system. We combine state-of-the-art algorithms ranging from high-order finite-element solutions of the wave equation to quasi-Newton optimization algorithms using trust-region methods that can handle inexact derivatives. All is steered by an automated interactive graph-based workflow framework capable of orchestrating all necessary pieces. This naturally facilitates the creation of new Earth models and hopefully sparks new scientific insights. Additionally, and even more importantly, it enhances reproducibility and reliability of the final results.
NASA Astrophysics Data System (ADS)
Winkel, D.; Bol, G. H.; van Asselen, B.; Hes, J.; Scholten, V.; Kerkmeijer, L. G. W.; Raaymakers, B. W.
2016-12-01
To develop an automated radiotherapy treatment planning and optimization workflow to efficiently create patient-specifically optimized, clinical-grade treatment plans for prostate cancer, and to implement it in clinical practice. A two-phase planning and optimization workflow was developed to automatically generate 77 Gy 5-field simultaneously integrated boost intensity modulated radiation therapy (SIB-IMRT) plans for prostate cancer treatment. A retrospective planning study (n = 100) was performed in which automatically and manually generated treatment plans were compared. A clinical pilot (n = 21) was performed to investigate the usability of our method. Operator time for the planning process was reduced to <5 min. The retrospective planning study showed that 98 plans met all clinical constraints. Significant improvements were made in the volume receiving 72 Gy (V72Gy) of the bladder and rectum and in the mean dose of the bladder and the body. A reduced plan variance was observed. During the clinical pilot, 20 automatically generated plans met all constraints and 17 plans were selected for treatment. The automated radiotherapy treatment planning and optimization workflow is capable of efficiently generating patient-specifically optimized, improved clinical-grade plans. It has now been adopted as the standard workflow in our clinic to generate treatment plans for prostate cancer.
Siebers, Jeffrey V
2008-04-04
Monte Carlo (MC) is rarely used for IMRT plan optimization outside of research centres due to the extensive computational resources or long computation times required to complete the process. Time can be reduced by degrading the statistical precision of the MC dose calculation used within the optimization loop. However, this eventually introduces optimization convergence errors (OCEs). This study determines the statistical noise levels tolerated during MC-IMRT optimization under the condition that the optimized plan has OCEs <100 cGy (1.5% of the prescription dose) for MC-optimized IMRT treatment plans. Seven-field prostate IMRT treatment plans for 10 prostate patients are used in this study. Pre-optimization is performed for deliverable beams with a pencil-beam (PB) dose algorithm. Further deliverable-based optimization proceeds using: (1) MC-based optimization, where dose is recomputed with MC after each intensity update, or (2) a once-corrected (OC) MC-hybrid optimization, where an MC dose computation defines beam-by-beam dose correction matrices that are used during a PB-based optimization. Optimizations are performed with nominal per-beam MC statistical precisions of 2, 5, 8, 10, 15, and 20%. Following optimizer convergence, beams are re-computed with MC using 2% per-beam nominal statistical precision, and the 2 PTV and 10 OAR dose indices used in the optimization objective function are tallied. For both the MC-optimization and OC-optimization methods, statistical equivalence tests found that OCEs are less than 1.5% of the prescription dose for plans optimized with nominal statistical uncertainties of up to 10% per beam. The achieved statistical uncertainty in the patient for the 10% per-beam simulations from the combination of the 7 beams is ~3% with respect to the maximum dose for voxels with D > 0.5D_max.
The MC dose computation time for the OC-optimization is only 6.2 minutes on a single 3 GHz processor, with results clinically equivalent to high-precision MC computations.
Scanning SQUID Microscope and its Application in Detecting Weak Currents
NASA Astrophysics Data System (ADS)
Zhong, Chaorong; Li, Fei; Zhang, Fenghui; Ding, Hongsheng; Luo, Sheng; Lin, Dehua; He, Yusheng
A scanning SQUID microscope based on an HTS dc SQUID has been developed. One application of this microscope is detecting weak currents inside a sample. Since the SQUID detects the vertical component of the magnetic field in the plane where the SQUID lies, whereas the current producing that field is actually located in a plane below the SQUID, a two-plane model has been established. In this model, the Biot-Savart law and Fourier transformation were used to invert the detected magnetic field into the underlying weak current. It has been shown that the distance between the current and the SQUID and the noise level of the experimental data have significant effects on the quality of the inversion.
Flight instrument and telemetry response and its inversion
NASA Technical Reports Server (NTRS)
Weinberger, M. R.
1971-01-01
Mathematical models of rate gyros, servo accelerometers, pressure transducers, and telemetry systems were derived and their parameters were obtained from laboratory tests. Analog computer simulations were used extensively for verification of the validity for fast and large input signals. An optimal inversion method was derived to reconstruct input signals from noisy output signals and a computer program was prepared.
Next-generation acceleration and code optimization for light transport in turbid media using GPUs
Alerstam, Erik; Lo, William Chun Yip; Han, Tianyi David; Rose, Jonathan; Andersson-Engels, Stefan; Lilge, Lothar
2010-01-01
A highly optimized Monte Carlo (MC) code package for simulating light transport is developed on the latest graphics processing unit (GPU) built for general-purpose computing from NVIDIA - the Fermi GPU. In biomedical optics, the MC method is the gold-standard approach for simulating light transport in biological tissue, both due to its accuracy and its flexibility in modelling realistic, heterogeneous tissue geometry in 3-D. However, the widespread use of MC simulations in inverse problems, such as treatment planning for PDT, is limited by their long computation time. Despite its parallel nature, optimizing MC code on the GPU has been shown to be a challenge, particularly when the sharing of simulation result matrices among many parallel threads demands the frequent use of atomic instructions to access the slow GPU global memory. This paper proposes an optimization scheme that utilizes the fast shared memory to resolve the performance bottleneck caused by atomic access, and discusses numerous other optimization techniques needed to harness the full potential of the GPU. Using these techniques, a widely accepted MC code package in biophotonics, called MCML, was successfully accelerated on a Fermi GPU by approximately 600x compared to a state-of-the-art Intel Core i7 CPU. A skin model consisting of 7 layers was used as the standard simulation geometry. To demonstrate the possibility of GPU cluster computing, the same GPU code was executed on four GPUs, showing a linear improvement in performance with an increasing number of GPUs. The GPU-based MCML code package, named GPU-MCML, is compatible with a wide range of graphics cards and is released as open-source software in two versions: an optimized version tuned for high performance and a simplified version for beginners (http://code.google.com/p/gpumcml). PMID:21258498
A musculoskeletal shoulder model based on pseudo-inverse and null-space optimization.
Terrier, Alexandre; Aeberhard, Martin; Michellod, Yvan; Mullhaupt, Philippe; Gillet, Denis; Farron, Alain; Pioletti, Dominique P
2010-11-01
The goal of the present work was to assess the feasibility of using a pseudo-inverse and null-space optimization approach in modeling shoulder biomechanics. The method was applied to a simplified musculoskeletal shoulder model. The mechanical system consisted of the arm, and the external forces were the arm weight, 6 scapulo-humeral muscles and the reaction at the glenohumeral joint, which was considered a spherical joint. Muscle wrapping was considered around the humeral head, assumed spherical. The dynamical equations were solved in a Lagrangian approach. The mathematical redundancy of the mechanical system was resolved in two steps: a pseudo-inverse optimization to minimize the squared muscle stress, and a null-space optimization to restrict the muscle forces to physiological limits. Several movements were simulated. The mathematical and numerical aspects of the constrained redundancy problem were efficiently solved by the proposed method. The prediction of muscle moment arms was consistent with cadaveric measurements, and the joint reaction force was consistent with in vivo measurements. This preliminary work demonstrated that the developed algorithm has great potential for more complex musculoskeletal modeling of the shoulder joint. In particular, it could be further applied to a non-spherical joint model, allowing for the natural translation of the humeral head in the glenoid fossa. Copyright © 2010 IPEM. Published by Elsevier Ltd. All rights reserved.
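The two-step pseudo-inverse/null-space idea can be sketched with a toy moment-arm matrix (all numbers are illustrative, not the paper's shoulder model): the pseudo-inverse gives the minimum-effort solution of the torque equation, and any null-space component can then adjust individual muscle forces without changing the joint torque.

```python
import numpy as np

# Toy moment-arm matrix: 3 torque components, 6 muscles (redundant system)
rng = np.random.default_rng(1)
M = rng.standard_normal((3, 6))
tau = np.array([1.0, -0.5, 0.2])  # required joint torques

# Step 1: pseudo-inverse solution, minimizing the sum of squared forces
M_pinv = np.linalg.pinv(M)
f0 = M_pinv @ tau

# Step 2: null-space projector; adding N @ z leaves the torques unchanged,
# so z can be chosen to push forces toward physiological limits
N = np.eye(6) - M_pinv @ M
z = rng.standard_normal(6)  # any null-space adjustment
f = f0 + N @ z

print(np.allclose(M @ f0, tau), np.allclose(M @ f, tau))  # -> True True
```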
Inverse Diffusion Curves Using Shape Optimization.
Zhao, Shuang; Durand, Fredo; Zheng, Changxi
2018-07-01
The inverse diffusion curve problem focuses on automatic creation of diffusion curve images that resemble user provided color fields. This problem is challenging since the 1D curves have a nonlinear and global impact on resulting color fields via a partial differential equation (PDE). We introduce a new approach complementary to previous methods by optimizing curve geometry. In particular, we propose a novel iterative algorithm based on the theory of shape derivatives. The resulting diffusion curves are clean and well-shaped, and the final image closely approximates the input. Our method provides a user-controlled parameter to regularize curve complexity, and generalizes to handle input color fields represented in a variety of formats.
Callewaert, Francois; Butun, Serkan; Li, Zhongyang; Aydin, Koray
2016-01-01
The objective-first inverse-design algorithm is used to design an ultra-compact optical diode. Based on silicon and air only, this optical diode relies on asymmetric spatial mode conversion between the left and right ports. The first even mode incident from the left port is transmitted to the right port after being converted into an odd mode. On the other hand, the same mode incident from the right port is reflected back by the optical diode's dielectric structure. The convergence and performance of the algorithm are studied, along with a transform method that converts a continuous-permittivity medium into a binary material design. The optimal device is studied with full-wave electromagnetic simulations to compare its behavior under right and left incidence, in 2D and 3D settings as well. A parametric study is designed to understand the impact of the design-space size and initial conditions on the performance of the optimized devices. A broadband optical diode behavior is observed after optimization, with a large rejection ratio between the two transmission directions. This illustrates the potential of the objective-first inverse-design method for designing ultra-compact broadband photonic devices. PMID:27586852
NASA Astrophysics Data System (ADS)
Harken, B.; Geiges, A.; Rubin, Y.
2013-12-01
There are several stages in any hydrological modeling campaign, including: formulation and analysis of a priori information, data acquisition through field campaigns, inverse modeling, and forward modeling and prediction of some environmental performance metric (EPM). The EPM being predicted could be, for example, contaminant concentration, plume travel time, or aquifer recharge rate. These predictions often have significant bearing on some decision that must be made. Examples include: how to allocate limited remediation resources between multiple contaminated groundwater sites, where to place a waste repository site, and what extraction rates can be considered sustainable in an aquifer. Providing an answer to these questions depends on predictions of EPMs using forward models as well as levels of uncertainty related to these predictions. Uncertainty in model parameters, such as hydraulic conductivity, leads to uncertainty in EPM predictions. Often, field campaigns and inverse modeling efforts are planned and undertaken with reduction of parametric uncertainty as the objective. The tool of hypothesis testing allows this to be taken one step further by considering uncertainty reduction in the ultimate prediction of the EPM as the objective, and gives a rational basis for weighing costs and benefits at each stage. When using the tool of statistical hypothesis testing, the EPM is cast into a binary outcome. This is formulated as null and alternative hypotheses, which can be accepted and rejected with statistical formality. When accounting for all sources of uncertainty at each stage, the level of significance of this test provides a rational basis for planning, optimization, and evaluation of the entire campaign. Case-specific information, such as the consequences of prediction error and site-specific costs, can be used in establishing selection criteria based on what level of risk is deemed acceptable.
This framework is demonstrated and discussed using various synthetic case studies. The case studies involve contaminated aquifers where a decision must be made based on prediction of when a contaminant will arrive at a given location. The EPM, in this case contaminant travel time, is cast into the hypothesis testing framework. The null hypothesis states that the contaminant plume will arrive at the specified location before a critical value of time passes, and the alternative hypothesis states that the plume will arrive after the critical time passes. Different field campaigns are analyzed based on effectiveness in reducing the probability of selecting the wrong hypothesis, which in this case corresponds to reducing uncertainty in the prediction of plume arrival time. To examine the role of inverse modeling in this framework, case studies involving both Maximum Likelihood parameter estimation and Bayesian inversion are used.
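The arrival-time hypothesis test above can be sketched by Monte Carlo over an assumed predictive distribution of plume travel time (a lognormal here; all numbers are illustrative, not from the case studies):

```python
import numpy as np

def p_wrong_hypothesis(t_true, t_crit, sigma, n=100_000, seed=42):
    """Probability of selecting the wrong hypothesis, estimated by Monte
    Carlo over an assumed lognormal predictive distribution of arrival time.
    H0: arrival before t_crit. Here t_true > t_crit, so H1 is correct, and
    an error occurs whenever a predicted arrival falls below t_crit."""
    rng = np.random.default_rng(seed)
    samples = rng.lognormal(np.log(t_true), sigma, n)
    return np.mean(samples < t_crit)

# A field campaign that shrinks predictive uncertainty (sigma) lowers the
# probability of choosing the wrong hypothesis
print(p_wrong_hypothesis(12.0, 10.0, sigma=0.30))
print(p_wrong_hypothesis(12.0, 10.0, sigma=0.10))
```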
A Sensitivity Analysis of Tsunami Inversions on the Number of Stations
NASA Astrophysics Data System (ADS)
An, Chao; Liu, Philip L.-F.; Meng, Lingsen
2018-05-01
Current finite-fault inversions of tsunami recordings generally adopt as many tsunami stations as possible to better constrain earthquake source parameters. In this study, inversions are evaluated by the waveform residual that measures the difference between model predictions and recordings, and the dependence of the quality of inversions on the number of tsunami stations is derived. Results for the 2011 Tohoku event show that, if the tsunami stations are optimally located, the waveform residual decreases significantly with the number of stations when the number is 1–4 and remains almost constant when the number is larger than 4, indicating that 2–4 stations are able to recover the main characteristics of the earthquake source. The optimal location of tsunami stations is explained in the text. Similar analysis is applied to the Manila Trench in the South China Sea using artificially generated earthquakes and virtual tsunami stations. Results confirm that 2–4 stations are necessary and sufficient to constrain the earthquake source parameters, and the optimal sites of stations are recommended in the text. The conclusion is useful for the design of new tsunami warning systems. Current strategies of tsunameter network design mainly focus on the early detection of tsunami waves from potential sources to coastal regions. We therefore recommend that, in addition to the current strategies, the waveform residual also be taken into consideration so as to minimize the error of tsunami wave prediction for warning purposes.
NASA Astrophysics Data System (ADS)
Stavrakou, T.; Muller, J.; de Smedt, I.; van Roozendael, M.; Vrekoussis, M.; Wittrock, F.; Richter, A.; Burrows, J.
2008-12-01
Formaldehyde (HCHO) and glyoxal (CHOCHO) are carbonyls formed in the oxidation of volatile organic compounds (VOCs) emitted by plants, anthropogenic activities, and biomass burning. They are also directly emitted by fires. Although this primary production represents only a small part of the global source for both species, it can be locally important during intense fire events. Simultaneous observations of formaldehyde and glyoxal retrieved from the SCIAMACHY satellite instrument in 2005, provided by BIRA/IASB and the Bremen group, respectively, are compared with the corresponding columns simulated with the IMAGESv2 global CTM. The chemical mechanism has been optimized with respect to HCHO and CHOCHO production from pyrogenically emitted NMVOCs, based on the Master Chemical Mechanism (MCM) and on an explicit profile for biomass burning emissions. Gas-to-particle conversion of glyoxal in clouds and in aqueous aerosols is considered in the model. In this study we provide top-down estimates for fire emissions of HCHO and CHOCHO precursors by performing a two-compound inversion of emissions using the adjoint of the IMAGES model. The pyrogenic fluxes are optimized at the model resolution. The two-compound inversion offers the advantage that the information gained from measurements of one species constrains the sources of both compounds, due to the existence of common precursors. In a first inversion, only the burnt biomass amounts are optimized. In subsequent simulations, the emission factors for key individual NMVOC compounds are also varied.
Feasible Muscle Activation Ranges Based on Inverse Dynamics Analyses of Human Walking
Simpson, Cole S.; Sohn, M. Hongchul; Allen, Jessica L.; Ting, Lena H.
2015-01-01
Although it is possible to produce the same movement using an infinite number of different muscle activation patterns owing to musculoskeletal redundancy, the degree to which observed variations in muscle activity can deviate from optimal solutions computed from biomechanical models is not known. Here, we examined the range of biomechanically permitted activation levels in individual muscles during human walking using a detailed musculoskeletal model and experimentally-measured kinetics and kinematics. Feasible muscle activation ranges define the minimum and maximum possible level of each muscle’s activation that satisfy inverse dynamics joint torques assuming that all other muscles can vary their activation as needed. During walking, 73% of the muscles had feasible muscle activation ranges that were greater than 95% of the total muscle activation range over more than 95% of the gait cycle, indicating that, individually, most muscles could be fully active or fully inactive while still satisfying inverse dynamics joint torques. Moreover, the shapes of the feasible muscle activation ranges did not resemble previously-reported muscle activation patterns nor optimal solutions, i.e. static optimization and computed muscle control, that are based on the same biomechanical constraints. Our results demonstrate that joint torque requirements from standard inverse dynamics calculations are insufficient to define the activation of individual muscles during walking in healthy individuals. Identifying feasible muscle activation ranges may be an effective way to evaluate the impact of additional biomechanical and/or neural constraints on possible versus actual muscle activity in both normal and impaired movements. PMID:26300401
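A two-muscle toy version of a feasible activation range (hypothetical positive moment arms; the paper uses a detailed musculoskeletal model and inverse dynamics torques instead): given a required joint torque, one muscle's admissible activation interval follows from letting the other muscle sweep its full [0, 1] range.

```python
def feasible_range(tau, r_self, r_other):
    """Feasible activation range of one muscle (moment arm r_self) when the
    other muscle (moment arm r_other) may take any activation in [0, 1] and
    the pair must produce joint torque tau = r_self*a + r_other*b."""
    lo = (tau - r_other) / r_self  # other muscle fully active
    hi = tau / r_self              # other muscle silent
    lo, hi = min(lo, hi), max(lo, hi)
    return max(lo, 0.0), min(hi, 1.0)  # clip to physiological bounds

# Hypothetical moment arms of 2 and 1 (arbitrary units), required torque 1.5
print(feasible_range(1.5, 2.0, 1.0))  # -> (0.25, 0.75)
```

In the full problem the same min/max is computed per muscle by linear programming over all other muscles simultaneously.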
Run-to-Run Optimization Control Within Exact Inverse Framework for Scan Tracking.
Yeoh, Ivan L; Reinhall, Per G; Berg, Martin C; Chizeck, Howard J; Seibel, Eric J
2017-09-01
A run-to-run optimization controller uses a reduced set of measurement parameters, in comparison to more general feedback controllers, to converge to the best control point for a repetitive process. A new run-to-run optimization controller is presented for the scanning fiber device used for image acquisition and display. This controller utilizes very sparse measurements to estimate a system energy measure and updates the input parameterizations iteratively within a feedforward with exact-inversion framework. Analysis, simulation, and experimental investigations on the scanning fiber device demonstrate improved scan accuracy over previous methods and automatic controller adaptation to changing operating temperature. A specific application example and quantitative error analyses are provided of a scanning fiber endoscope that maintains high image quality continuously across a 20 °C temperature rise without interruption of the 56 Hz video.
Textural feature calculated from segmental fluences as a modulation index for VMAT.
Park, So-Yeon; Park, Jong Min; Kim, Jung-In; Kim, Hyoungnyoun; Kim, Il Han; Ye, Sung-Joon
2015-12-01
Textural features calculated from various segmental fluences of volumetric modulated arc therapy (VMAT) plans were optimized to enhance their performance in predicting plan delivery accuracy. Twenty prostate and twenty head and neck VMAT plans were selected retrospectively. Fluences were generated for each VMAT plan by summation of segments at sequential groups of control points. The numbers of summed segments were 5, 10, 20, 45, 90, 178 and 356. For each fluence, we investigated 6 textural features: angular second moment, inverse difference moment, contrast, variance, correlation and entropy (at particular displacement distances, d = 1, 5 and 10). Spearman's rank correlation coefficients (rs) were calculated between each textural feature and several different measures of VMAT delivery accuracy. The rs values of contrast (d = 10) with 10 segments against the global and local gamma passing rates with 2%/2 mm criteria were 0.666 (p < 0.001) and 0.573 (p < 0.001), respectively. Contrast (d = 10) showed rs values of -0.895 (p < 0.001) and 0.727 (p < 0.001) against multi-leaf collimator positional errors and gantry angle errors during delivery, respectively. The number of statistically significant rs values (p < 0.05) against the changes in dose-volumetric parameters during delivery was 14 among a total of 35 tested parameters. Contrast (d = 10) with 10 segments showed higher correlations with VMAT delivery accuracy than the conventional modulation indices. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
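The contrast feature at displacement d can be sketched from a gray-level co-occurrence matrix (GLCM); the binning, normalization and horizontal-only displacement below are simplifying assumptions, not the paper's exact recipe:

```python
import numpy as np

def glcm_contrast(image, d=1, levels=8):
    """Contrast feature sum_{i,j} P(i,j) * (i-j)^2 from the gray-level
    co-occurrence matrix for a horizontal displacement of d pixels."""
    img = np.asarray(image, float)
    # Quantize intensities into `levels` gray levels
    q = np.minimum((img / (img.max() + 1e-12) * levels).astype(int), levels - 1)
    P = np.zeros((levels, levels))
    for i, j in zip(q[:, :-d].ravel(), q[:, d:].ravel()):
        P[i, j] += 1
    P /= P.sum()  # normalize counts to a probability matrix
    i, j = np.indices(P.shape)
    return float((P * (i - j) ** 2).sum())

# A uniform fluence has zero contrast; a checkerboard has high contrast
flat = np.ones((8, 8))
checker = np.indices((8, 8)).sum(0) % 2
print(glcm_contrast(flat), glcm_contrast(checker, d=1))
```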
Topology-Optimized Multilayered Metaoptics
NASA Astrophysics Data System (ADS)
Lin, Zin; Groever, Benedikt; Capasso, Federico; Rodriguez, Alejandro W.; Lončar, Marko
2018-04-01
We propose a general topology-optimization framework for metasurface inverse design that can automatically discover highly complex multilayered metastructures with increased functionalities. In particular, we present topology-optimized multilayered geometries exhibiting angular phase control, including a single-piece nanophotonic metalens with angular aberration correction, as well as an angle-convergent metalens that focuses light onto the same focal spot regardless of the angle of incidence.
Torsional Ultrasound Sensor Optimization for Soft Tissue Characterization
Melchor, Juan; Muñoz, Rafael; Rus, Guillermo
2017-01-01
Torsional mechanical waves have the capability to characterize shear stiffness moduli of soft tissue. Under this hypothesis, a computational methodology is proposed to design and optimize a piezoelectric-based transmitter and receiver to generate and measure the response of torsional ultrasonic waves. The procedure is divided into two steps: (i) a finite element method (FEM) is developed to obtain the transmitted and received waveforms as well as the resonance frequency of a preliminary geometry, validated with a semi-analytical simplified model, and (ii) a probabilistic optimality criterion for the design, based on an inverse problem estimating the robust probability of detection (RPOD), is used to maximize the detection of the pathology, defined in terms of changes of shear stiffness. This study compares design options in two separate models, in transmission and in contact, respectively. The main contribution of this work is a framework establishing the forward, inverse, and optimization procedures needed to choose an appropriate set of transducer parameters. This methodological framework may be generalizable to other applications. PMID:28617353
Inverse Optimization of Plasmonic and Antireflective Grating in Thin Film PV Cells
NASA Astrophysics Data System (ADS)
Hajimirza, Shima; Howell, John
2012-06-01
This work addresses inverse optimization of three-dimensional front and back surface texture grating specifications, for the purpose of shaping the absorptivity spectrum of silicon thin film cells in targeted ways. Periodic plasmonic gratings with dimensions comparable to or less than the incident light wavelength are known to enhance light absorption. We consider surface patterning of amorphous silicon (a-Si) thin films using front and/or back metallic nanostrips and ITO coatings, and show that wideband enhancement in the unpolarized absorptivity spectrum can be achieved when back reflectors are used. The overall short circuit current enhancement using such structures is significant and can be as high as 97%. For TM-polarized waves it can be even higher, as reported in previous work. In this work, however, we focus on optimization for the more realistic unpolarized radiation, which is of significantly higher complexity. In addition, optimization is done with respect to two objective functions independently: spectral absorptivity and the gain-bandwidth product of the absorptivity spectrum.
NASA Astrophysics Data System (ADS)
Massin, F.; Malcolm, A. E.
2017-12-01
Knowing earthquake source mechanisms gives valuable information for earthquake response planning and hazard mitigation. Earthquake source mechanisms can be analyzed using long period waveform inversion (for moderate size sources with sufficient signal to noise ratio) and body-wave first motion polarity or amplitude ratio inversion (for micro-earthquakes with sufficient data coverage). A robust approach that gives both source mechanisms and their associated probabilities across all source scales would greatly simplify the determination of source mechanisms and allow for more consistent interpretations of the results. Following previous work on shift and stack approaches, we develop such a probabilistic source mechanism analysis, using waveforms, which does not require polarity picking. For a given source mechanism, the first period of the observed body-waves is selected for all stations, multiplied by their corresponding theoretical polarity and stacked together. (The first period is found from a manually picked travel time by measuring the central period where the signal power is concentrated, using the second moment of the power spectral density function.) As in other shift and stack approaches, our method is not based on the optimization of an objective function through an inversion. Instead, the power of the polarity-corrected stack is a proxy for the likelihood of the trial source mechanism, with the most powerful stack corresponding to the most likely source mechanism. Using synthetic data, we test our method for robustness to the data coverage, coverage gap, signal to noise ratio, travel-time picking errors and non-double couple component. We then present results for field data in a volcano-tectonic context. Our results are reliable when constrained by 15 body-wavelets, with gap below 150 degrees, signal to noise ratio over 1 and arrival time error below a fifth of the period (0.2T) of the body-wave. 
We demonstrate that the source scanning approach for source mechanism analysis has similar advantages to waveform inversion (full waveform data, no manual intervention, probabilistic approach) and similar applicability to polarity inversion (any source size, any instrument type).
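The stack-power proxy described above can be sketched in a few lines. The synthetic wavelets and polarities here are invented for illustration; real use requires the picked first-period body wavelet at each station and theoretical polarities from the trial mechanism:

```python
import numpy as np

def stack_power(wavelets, polarities):
    """Power of the polarity-corrected stack: each station's first-period
    body wavelet is multiplied by the trial mechanism's theoretical
    polarity (+1/-1) and summed; a more powerful stack indicates a more
    likely source mechanism."""
    stack = np.sum(polarities[:, None] * wavelets, axis=0)
    return np.sum(stack ** 2)

rng = np.random.default_rng(1)
true_pol = np.where(rng.random(15) < 0.5, -1.0, 1.0)  # 15 stations
base = np.sin(np.linspace(0, 2 * np.pi, 64))          # first-period wavelet
wavelets = true_pol[:, None] * base + 0.1 * rng.standard_normal((15, 64))

# The correct polarity pattern stacks coherently; a wrong one does not.
wrong_pol = true_pol.copy()
wrong_pol[:7] *= -1
assert stack_power(wavelets, true_pol) > stack_power(wavelets, wrong_pol)
```

Note the intrinsic sign ambiguity: flipping every polarity leaves the stack power unchanged, which is why mechanisms are explored over a grid of trial orientations rather than recovered from a single stack.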
Target dose conformity in 3-dimensional conformal radiotherapy and intensity modulated radiotherapy.
Wu, Vincent W C; Kwong, Dora L W; Sham, Jonathan S T
2004-05-01
Dose conformity to the planning target volume is an important criterion in radiotherapy treatment planning, for which the conformity index (CI) is a useful assessment tool. The purpose of this study is to compare the differences in CI for the treatment planning of four cancers: nasopharynx, oesophagus, lung and prostate. Seventy patients with cancers of the nasopharynx (30), oesophagus (15), lung (15) and prostate (10) were recruited. Each of these patients was planned with three sets of treatment plans using the FOCUS treatment planning system: the forward and inverse 3DCRT plans and the IMRT plan. The CI was generated for each treatment plan. The mean CI from each cancer patient group was calculated and compared with the other three cancer groups. The mean value of CI was also compared among the three planning methods. The oesophageal and lung cancers demonstrated relatively higher overall mean CI values (0.64 and 0.62, respectively), whereas those of the nasopharynx and prostate were lower (0.54 and 0.50, respectively). With regard to the planning method groups, the IMRT plans produced the highest overall mean CI (0.62), while those for the forward and inverse 3DCRT were similar (0.57 and 0.55, respectively). For the four selected cancers, dose conformity was easier to achieve for oesophageal and lung cancers than for nasopharyngeal and prostate cancers. The IMRT plans were more effective in achieving dose conformity than the 3DCRT plans.
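The abstract does not reproduce the CI formula it uses. Assuming a van't Riet-style conformation number (a common choice that combines target coverage and dose selectivity), a minimal sketch is:

```python
def conformity_index(tv, piv, tv_piv):
    """van't Riet-style conformation number:
    (target coverage) x (dose selectivity).
    tv     : planning target volume (cc)
    piv    : total volume enclosed by the prescription isodose (cc)
    tv_piv : part of the target covered by that isodose (cc)
    Equals 1 for perfect conformity, < 1 otherwise."""
    return (tv_piv / tv) * (tv_piv / piv)

# Example: 100 cc target, 95 cc covered, isodose encloses 140 cc in total.
ci = conformity_index(tv=100.0, piv=140.0, tv_piv=95.0)
print(round(ci, 3))  # 0.645
```

Under this definition, a plan penalized either for underdosing the target or for spilling prescription dose into normal tissue scores below 1, consistent with the mean values of 0.5 to 0.64 reported above.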
Multiple estimation channel decoupling and optimization method based on inverse system
NASA Astrophysics Data System (ADS)
Wu, Peng; Mu, Rongjun; Zhang, Xin; Deng, Yanpeng
2018-03-01
This paper addresses the intelligent autonomous navigation requirements of an intelligent deformation missile. Based on dynamics and kinematics modeling of the missile, navigation subsystem solution methods, and error modeling, it focuses on the corresponding data fusion and decision fusion technology: the sensitive channels of the filter input are decoupled through an inverse system designed from the dynamics, reducing the influence of sudden changes in the measurement information on the filter input. A series of simulation experiments verified the feasibility and effectiveness of the inverse-system decoupling algorithm.
NASA Astrophysics Data System (ADS)
Sun, Jianbao; Shen, Zheng-Kang; Bürgmann, Roland; Wang, Min; Chen, Lichun; Xu, Xiwei
2013-08-01
We develop a three-step maximum a posteriori probability method for coseismic rupture inversion, which aims at maximizing the a posteriori probability density function (PDF) of elastic deformation solutions of earthquake rupture. The method originates from the fully Bayesian inversion and mixed linear-nonlinear Bayesian inversion methods and shares the same posterior PDF with them, while overcoming difficulties with convergence when large numbers of low-quality data are used and greatly improving the convergence rate using optimization procedures. A highly efficient global optimization algorithm, adaptive simulated annealing, is used to search for the maximum of the posterior PDF ("mode" in statistics) in the first step. The second step approaches the "true" solution further using the Monte Carlo inversion technique with positivity constraints, with all parameters obtained from the first step as the initial solution. Then slip artifacts are eliminated from the slip models in the third step, using the same procedure as the second step with fixed fault geometry parameters. We first design a fault model with 45° dip angle and oblique slip, and produce corresponding synthetic interferometric synthetic aperture radar (InSAR) data sets to validate the reliability and efficiency of the new method. We then apply this method to InSAR data inversion for the coseismic slip distribution of the 14 April 2010 Mw 6.9 Yushu, China earthquake. Our preferred slip model is composed of three segments, with most of the slip occurring within 15 km depth; the maximum slip reaches 1.38 m at the surface. The seismic moment released is estimated to be 2.32 × 10^19 Nm, consistent with the seismic estimate of 2.50 × 10^19 Nm.
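The first step's mode search can be illustrated with a generic simulated-annealing sketch on a toy two-parameter posterior. The adaptive simulated annealing used in the paper is more sophisticated; the toy posterior, step size, and cooling schedule below are illustrative assumptions:

```python
import numpy as np

def simulated_annealing(neg_log_post, x0, step=0.5, t0=1.0, iters=5000, seed=0):
    """Search for the posterior mode (MAP point) by annealing: accept
    uphill moves in -log posterior with probability exp(-dE/T) while the
    temperature T decays, and track the best point ever visited."""
    rng = np.random.default_rng(seed)
    x, e = np.array(x0, float), neg_log_post(x0)
    best_x, best_e = x.copy(), e
    for k in range(iters):
        t = t0 / (1.0 + 0.01 * k)                   # cooling schedule
        cand = x + step * rng.standard_normal(x.size)
        ec = neg_log_post(cand)
        if ec < e or rng.random() < np.exp(-(ec - e) / t):
            x, e = cand, ec
            if e < best_e:
                best_x, best_e = x.copy(), e
    return best_x

# Toy "posterior": a Gaussian whose mode stands in for slip parameters.
neg_log_post = lambda x: 0.5 * ((x[0] - 1.38) ** 2 + (x[1] + 0.5) ** 2)
mode = simulated_annealing(neg_log_post, [0.0, 0.0])
print(np.round(mode, 1))
```

In the paper, the mode found this way seeds a Monte Carlo exploration with positivity constraints, so the final uncertainty estimate does not depend on the annealing path.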
Metamodel-based inverse method for parameter identification: elastic-plastic damage model
NASA Astrophysics Data System (ADS)
Huang, Changwu; El Hami, Abdelkhalak; Radi, Bouchaïb
2017-04-01
This article proposes a metamodel-based inverse method for material parameter identification and applies it to elastic-plastic damage model parameter identification. An elastic-plastic damage model is presented and implemented in numerical simulation. The metamodel-based inverse method is proposed in order to overcome the computational cost of the conventional inverse method. In the metamodel-based inverse method, a Kriging metamodel is constructed based on an experimental design in order to model the relationship between material parameters and the objective function values of the inverse problem, and the optimization procedure is then executed on the metamodel. The application of the presented material model and the proposed identification method to the standard A 2017-T4 tensile test shows that the elastic-plastic damage model adequately describes the material's mechanical behaviour, and that the metamodel-based inverse method not only enhances the efficiency of parameter identification but also gives reliable results.
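The core idea, replacing expensive simulations with a cheap surrogate during optimization, can be sketched with a minimal RBF-kernel Kriging interpolator. The "hardening parameter", the misfit function, and the kernel settings below are illustrative assumptions, not the article's actual model:

```python
import numpy as np

def kriging_fit(X, y, length=1.0, nugget=1e-10):
    """Fit a simple Kriging (Gaussian-process) interpolator with an RBF
    covariance; returns a cheap predictor of the objective surface."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length ** 2)
    K = k(X, X) + nugget * np.eye(len(X))
    w = np.linalg.solve(K, y)
    return lambda Xq: k(np.atleast_2d(Xq), X) @ w

# Hypothetical "misfit vs. material parameter" data from a few simulations.
X = np.linspace(0.0, 2.0, 9).reshape(-1, 1)  # trial hardening parameter
y = (X[:, 0] - 1.3) ** 2                     # misfit to the experiment
predict = kriging_fit(X, y, length=0.5)

# Optimize on the metamodel instead of re-running the FE simulation.
grid = np.linspace(0.0, 2.0, 401).reshape(-1, 1)
best = grid[np.argmin(predict(grid))]
print(round(float(best[0]), 2))
```

The expensive finite-element solver is called only to generate the design points; every optimizer evaluation afterwards hits the metamodel, which is the source of the efficiency gain claimed above.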
Vaudaux, Catherine; Schneider, Uwe; Kaser-Hotz, Barbara
2007-01-01
We evaluated the impact of inverse planned intensity-modulated radiation therapy (IMRT) on the dose-volume histograms (DVHs) and on the normal tissue complication probabilities (NTCPs) of brain and eyes in dogs with nasal tumors. Nine dogs with large, caudally located nasal tumors were planned using conventional techniques and inverse planned IMRT for a total prescribed dose of 52.5 Gy in 3.5 Gy fractions. The equivalent uniform dose for brain and eyes was calculated to estimate the NTCP of these organs. The NTCP values as well as the DVHs were used to compare the treatment plans. The dose distribution in IMRT plans was more conformal than in conventional plans. The average dose delivered to one-third of the brain was 10 Gy lower with the IMRT plan compared with conventional planning. The mean partial brain volume receiving 43.6 Gy or more was reduced by 25.6% with IMRT. As a consequence, the NTCPs were also significantly lower in the IMRT plans. The mean NTCP of brain was two times lower, and at least one eye could be spared in all patients planned with IMRT. Another possibility with IMRT is dose escalation in the target to improve tumor control while keeping the NTCPs at the same level as for conventional planning.
NASA Astrophysics Data System (ADS)
Bejarano Buele, Ana Isabel
The treatment regimen for breast cancer patients typically involves Whole Breast Irradiation (WBI). The coverage and extent of the radiation treatment are dictated by the location of the tumor mass, breast tissue distribution, involvement of lymph nodes, and other factors. The current standard treatment approach used at our institution is a 3D tangential beam geometry, which involves two fields irradiating the breast, or a four-field beam arrangement covering the whole breast and involved nodes, while decreasing the dose to organs at risk (OARs) such as the lung and heart. The coverage of these targets can be difficult to achieve in patients with unfavorable thoracic geometries, especially in those cases in which the planning target volume (PTV) is extended to the chest wall. Exposure of the heart to ionizing radiation is known to increase the subsequent rate of ischemic heart disease. In these cases, inverse planned treatments have become a proven alternative to the 3D approach. The goal of this research project is to evaluate the factors that affect our current techniques, as well as to adapt the development of inverse modulated techniques for our clinic, in which breast cancer patients are one of the largest populations treated. For this purpose, a dosimetric comparison along with the evaluation of immobilization devices was necessary. Radiation treatment plans were designed and dosimetrically compared for 5 patients in both supine and prone positions. For 8 patients, VMAT and IMRT plans were created and evaluated in the supine position. Skin flash incorporation for inverse modulated plans required measurement of the surface dose as well as an evaluation of breast volume changes during a treatment course. It was found that prone 3D conformal plans as well as the VMAT and IMRT plans are generally superior in sparing OARs to supine plans with comparable PTV coverage.
Prone setup leads to larger shifts in breast volume as well as in positioning due to the difference in target geometry and nature of the immobilization device. IMRT and VMAT plans offer sparing of OARs from high dose regions with an increase of irradiated volume in the low dose regions. Skin flash incorporation was found to be accurate with the use of virtual bolus in the TPS for inverse modulated plans. Various factors influencing dose delivery in breast cancer radiation treatments were examined and quantified. Practical recommendations developed in the course of this project can improve our current techniques and provide alternatives to treat unique and challenging clinical cases.
Improving IMRT delivery efficiency with reweighted L1-minimization for inverse planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Hojin; Becker, Stephen; Lee, Rena
2013-07-15
Purpose: This study presents an improved technique to further simplify the fluence-map in intensity modulated radiation therapy (IMRT) inverse planning, thereby reducing plan complexity and improving delivery efficiency, while maintaining the plan quality. Methods: First-order total-variation (TV) minimization (min.) based on the L1-norm has been proposed to reduce the complexity of the fluence-map in IMRT by generating sparse fluence-map variations. However, with stronger dose sparing to the critical structures, the inevitable increase in fluence-map complexity can lead to inefficient dose delivery. Theoretically, L0-min. is the ideal solution for the sparse signal recovery problem, yet it is practically intractable due to the nonconvexity of the objective function. As an alternative, the authors use the iteratively reweighted L1-min. technique to incorporate the benefits of the L0-norm into the tractability of L1-min. The weight multiplied to each element is inversely related to the magnitude of the corresponding element, and is iteratively updated by the reweighting process. The proposed penalizing process combined with TV min. further improves sparsity in the fluence-map variations, hence ultimately enhancing the delivery efficiency. To validate the proposed method, this work compares three treatment plans obtained from quadratic min. (generally used in clinical IMRT), conventional TV min., and our proposed reweighted TV min. techniques, implemented with a large-scale L1-solver (template for first-order conic solver), for clinical data from five patients. Criteria such as conformation number (CN), modulation index (MI), and estimated treatment time are employed to assess the relationship between plan quality and delivery efficiency. Results: The proposed method yields simpler fluence-maps than the quadratic and conventional TV based techniques.
To attain a given CN and dose sparing to the critical organs for the 5 clinical cases, the proposed method reduces the number of segments by 10-15 and 30-35, relative to the TV min. and quadratic min. based plans, while MIs decrease by about 20%-30% and 40%-60% over the plans by the two existing techniques, respectively. Under such conditions, the total treatment time of the plans obtained from our proposed method can be reduced by 12-30 s and 30-80 s, mainly due to greatly shorter multileaf collimator (MLC) traveling time in IMRT step-and-shoot delivery. Conclusions: The reweighted L1-minimization technique provides a promising solution to simplify the fluence-map variations in IMRT inverse planning. It improves delivery efficiency by reducing the number of segments and the treatment time, while maintaining the plan quality in terms of target conformity and critical structure sparing.
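The reweighting idea can be illustrated on a plain sparse-recovery toy. The paper minimizes the total variation of the fluence map with a large-scale conic solver; the separable denoising problem below is a deliberate simplification so that the weighted L1 prox has a closed form:

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding: the prox operator of the (weighted) L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def reweighted_l1(y, lam=0.3, outer=4, eps=1e-2):
    """Iteratively reweighted L1 for min ||x - y||^2/2 + lam*sum w_i|x_i|:
    weights w_i = 1/(|x_i| + eps) mimic the L0 penalty, so small entries
    are pushed harder to zero while large entries are barely penalized."""
    x = soft(y, lam)               # plain L1 solution as the start
    for _ in range(outer):
        w = 1.0 / (np.abs(x) + eps)
        x = soft(y, lam * w)       # closed-form weighted prox
    return x

rng = np.random.default_rng(2)
truth = np.zeros(50)
truth[[3, 17, 40]] = [2.0, -1.5, 1.0]          # a few true "variations"
y = truth + 0.1 * rng.standard_normal(50)

x1 = soft(y, 0.3)          # conventional L1
xr = reweighted_l1(y)      # reweighted L1: at least as sparse
print(np.count_nonzero(xr), np.count_nonzero(x1))
```

The same mechanism, applied to fluence-map differences rather than raw coefficients, is what drives the segment-count reductions reported above.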
Rayleigh wave nonlinear inversion based on the Firefly algorithm
NASA Astrophysics Data System (ADS)
Zhou, Teng-Fei; Peng, Geng-Xin; Hu, Tian-Yue; Duan, Wen-Sheng; Yao, Feng-Chang; Liu, Yi-Mou
2014-06-01
Rayleigh waves have high amplitude, low frequency, and low velocity, and are treated as strong noise to be attenuated in reflection seismic surveys. This study addresses how to extract useful shear-wave velocity profiles and stratigraphic information from Rayleigh waves. We choose the Firefly algorithm for the inversion of surface waves. The Firefly algorithm, a new type of particle swarm optimization, has the advantages of being robust and highly effective, and allows global searching. We demonstrate the feasibility and advantages of the algorithm for Rayleigh wave inversion using both synthetic models and field data. The results show that the Firefly algorithm, which is a robust and practical method, can achieve nonlinear inversion of surface waves with high resolution.
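A minimal firefly-algorithm sketch on a toy misfit follows. The parameter choices and the sphere objective are illustrative assumptions standing in for the paper's dispersion-curve misfit; they are not its actual settings:

```python
import numpy as np

def firefly_minimize(f, dim=2, n=15, iters=100, beta0=1.0, gamma=0.1,
                     alpha=0.2, lo=-5.0, hi=5.0, seed=3):
    """Minimal firefly algorithm (a particle-swarm variant): each firefly
    drifts toward every brighter (lower-misfit) one with attractiveness
    beta0*exp(-gamma*r^2), plus a shrinking random walk; the best point
    ever evaluated is kept as the answer."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (n, dim))
    F = np.array([f(x) for x in X])
    best_x, best_f = X[np.argmin(F)].copy(), F.min()
    for t in range(iters):
        a = alpha * 0.97 ** t                  # annealed random-step size
        for i in range(n):
            for j in range(n):
                if F[j] < F[i]:                # firefly j is brighter
                    beta = beta0 * np.exp(-gamma * np.sum((X[i] - X[j]) ** 2))
                    X[i] = np.clip(X[i] + beta * (X[j] - X[i])
                                   + a * rng.standard_normal(dim), lo, hi)
                    F[i] = f(X[i])
                    if F[i] < best_f:
                        best_x, best_f = X[i].copy(), F[i]
    return best_x, best_f

# Toy misfit standing in for the dispersion-curve objective.
sphere = lambda x: float(np.sum((x - 1.0) ** 2))
x_best, f_best = firefly_minimize(sphere)
print(f_best < 1.0)
```

The distance-dependent attractiveness is what distinguishes the firefly scheme from plain particle swarm optimization: sub-swarms can converge to different basins, which is the global-search property exploited for the nonlinear dispersion inversion.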
Recursive inverse factorization.
Rubensson, Emanuel H; Bock, Nicolas; Holmström, Erik; Niklasson, Anders M N
2008-03-14
A recursive algorithm for the inverse factorization S(-1) = ZZ(*) of Hermitian positive definite matrices S is proposed. The inverse factorization is based on iterative refinement [A.M.N. Niklasson, Phys. Rev. B 70, 193102 (2004)] combined with a recursive decomposition of S. As the computational kernel is matrix-matrix multiplication, the algorithm can be parallelized, and the computational effort increases linearly with system size for systems with sufficiently sparse matrices. Recent advances in network theory are used to find appropriate recursive decompositions. We show that optimization of the so-called network modularity results in an improved partitioning compared to other approaches, in particular when the recursive inverse factorization is applied to overlap matrices of irregularly structured three-dimensional molecules.
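The iterative-refinement kernel that the recursion builds on can be sketched as follows. This is a dense toy with a crude scaled starting guess; the paper's point is that the same multiplication-only update runs recursively on sparse blocks:

```python
import numpy as np

def refine_inverse_factor(S, Z, sweeps=30):
    """Iterative refinement toward S^{-1} = Z Z^*: with the error
    delta = I - Z^* S Z, the update Z <- Z (I + delta/2) shrinks the
    error roughly quadratically once it is small
    (cf. Niklasson, Phys. Rev. B 70, 193102). Only matrix-matrix
    multiplications are needed, which is what makes it parallelizable."""
    n = S.shape[0]
    for _ in range(sweeps):
        delta = np.eye(n) - Z.conj().T @ S @ Z
        Z = Z @ (np.eye(n) + 0.5 * delta)
    return Z

rng = np.random.default_rng(4)
A = rng.standard_normal((6, 6))
S = A @ A.T + 6 * np.eye(6)           # Hermitian positive definite
Z0 = np.eye(6) / np.linalg.norm(S)    # crude scaled starting guess
Z = refine_inverse_factor(S, Z0)
print(np.allclose(Z @ Z.conj().T @ S, np.eye(6)))
```

The scaling of Z0 keeps the initial error ||I - Z0* S Z0|| below one, which is sufficient for the refinement to converge.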
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wan Chan Tseung, H; Ma, J; Ma, D
2015-06-15
Purpose: To demonstrate the feasibility of fast Monte Carlo (MC) based biological planning for the treatment of thyroid tumors in spot-scanning proton therapy. Methods: Recently, we developed a fast and accurate GPU-based MC simulation of proton transport that was benchmarked against Geant4.9.6 and used as the dose calculation engine in a clinically-applicable GPU-accelerated IMPT optimizer. Besides dose, it can simultaneously score the dose-averaged LET (LETd), which makes fast biological dose (BD) estimates possible. To convert from LETd to BD, we used a linear relation based on cellular irradiation data. Given a thyroid patient with a 93cc tumor volume, we created a 2-field IMPT plan in Eclipse (Varian Medical Systems). This plan was re-calculated with our MC to obtain the BD distribution. A second 5-field plan was made with our in-house optimizer, using pre-generated MC dose and LETd maps. Constraints were placed to maintain the target dose to within 25% of the prescription, while maximizing the BD. The plan optimization and calculation of dose and LETd maps were performed on a GPU cluster. The conventional IMPT and biologically-optimized plans were compared. Results: The mean target physical and biological doses from our biologically-optimized plan were, respectively, 5% and 14% higher than those from the MC re-calculation of the IMPT plan. Dose sparing to critical structures in our plan was also improved. The biological optimization, including the initial dose and LETd map calculations, can be completed in a clinically viable time (∼30 minutes) on a cluster of 25 GPUs. Conclusion: Taking advantage of GPU acceleration, we created a MC-based, biologically optimized treatment plan for a thyroid patient. Compared to a standard IMPT plan, a 5% increase in the target’s physical dose resulted in ∼3 times as much increase in the BD. Biological planning was thus effective in escalating the target BD.
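The LETd-to-biological-dose conversion is stated to be linear; a per-voxel sketch follows. The coefficient c below is a placeholder invented for illustration, not the authors' fit to cellular irradiation data:

```python
import numpy as np

def biological_dose(dose, let_d, c=0.04):
    """Linear LETd-based biological dose, BD = D * (1 + c * LETd).
    dose  : physical dose per voxel (Gy)
    let_d : dose-averaged LET per voxel (keV/um)
    c     : placeholder slope; the real value is fit to cell data."""
    return dose * (1.0 + c * let_d)

dose = np.array([2.0, 2.0, 2.0])    # same physical dose in three voxels
let_d = np.array([2.0, 5.0, 10.0])  # increasing dose-averaged LET
print(biological_dose(dose, let_d))  # higher LETd -> higher BD
```

Because BD is linear in both scored quantities, the optimizer can trade physical dose against LETd per voxel, which is how a 5% physical-dose increase can yield a roughly threefold larger BD increase in the target.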
Lagerwaard, Frank; Bohoudi, Omar; Tetar, Shyama; Admiraal, Marjan A; Rosario, Tezontl S; Bruynzeel, Anna
2018-04-05
Magnetic resonance-guided radiation therapy (MRgRT) not only allows for superior soft-tissue setup and online MR-guidance during delivery but also for inter-fractional plan re-optimization or adaptation. This plan adaptation involves repeat MR imaging, organs at risk (OARs) re-contouring, plan prediction (i.e., recalculating the baseline plan on the anatomy of that moment), plan re-optimization, and plan quality assurance. In contrast, intrafractional plan adaptation cannot be simply performed by pausing delivery at any given moment, adjusting contours, and re-optimizing, because of the complex and composite nature of deformable dose accumulation. To overcome this limitation, we applied a practical workaround by partitioning treatment fractions, each with half the original fraction dose. In between successive deliveries, the patient remained in the treatment position and all steps of the initial plan adaptation were repeated. Thus, this second re-optimization served as an intrafractional plan adaptation at 50% of the total delivery. The practical feasibility of this partitioning approach was evaluated in a patient treated with MRgRT for locally advanced pancreatic cancer (LAPC). MRgRT was delivered in 40 Gy in 10 fractions, with two fractions scheduled successively on each treatment day. The contoured gross tumor volume (GTV) was expanded by 3 mm, excluding parts of the OARs within this expansion, to derive the planning target volume for daily re-optimization (PTVOPT). The baseline GTV V95% achieved in this patient was 80.0%, to adhere to the high-dose constraints for the duodenum, stomach, and bowel (V33Gy < 1 cc and V36Gy < 0.1 cc). Treatment was performed on the MRIdian (ViewRay Inc, Mountain View, USA) using video-assisted breath-hold in shallow inspiration.
The dual plan adaptation resulted, for each partitioned fraction, in the generation of PlanPREDICTED1, PlanRE-OPTIMIZED1 (inter-fractional adaptation), PlanPREDICTED2, and PlanRE-OPTIMIZED2 (intrafractional adaptation). An offline analysis was performed to evaluate the benefit of inter-fractional versus intrafractional plan adaptation with respect to GTV coverage and high-dose OAR sparing for all five partitioned fractions. Interfractional changes in adjacent OARs were substantially larger than intrafractional changes. Mean GTV V95% was 76.8 ± 1.8% (PlanPREDICTED1), 83.4 ± 5.7% (PlanRE-OPTIMIZED1), 82.5 ± 4.3% (PlanPREDICTED2), and 84.4 ± 4.4% (PlanRE-OPTIMIZED2). Both plan re-optimizations appeared important for correcting the inappropriately high duodenal V33Gy values of 3.6 cc (PlanPREDICTED1) and 3.9 cc (PlanPREDICTED2) to 0.2 cc for both re-optimizations. To a smaller extent, this improvement was also observed for V25Gy values. For the stomach, bowel, and all other OARs, high and intermediate doses were well below preset constraints, even without re-optimization. The mean delivery time of each daily treatment was 90 minutes. This study presents the clinical application of combined inter-fractional and intrafractional plan adaptation during MRgRT for LAPC, using fraction partitioning with successive re-optimization. While interfractional plan adaptation in this study appeared to benefit both GTV coverage and OAR sparing, intrafractional adaptation was particularly useful for high-dose OAR sparing. Although all necessary steps lead to a prolonged treatment duration, this approach may be applied in selected cases where high doses to adjacent OARs are regarded as critical.
Reconstructing surface wave profiles from reflected acoustic pulses using multiple receivers.
Walstead, Sean P; Deane, Grant B
2014-08-01
Surface wave shapes are determined by analyzing underwater reflected acoustic signals collected at multiple receivers. The transmitted signals are of nominal frequency 300 kHz and are reflected off surface gravity waves that are paddle-generated in a wave tank. An inverse processing algorithm reconstructs 50 surface wave shapes over a length span of 2.10 m. The inverse scheme uses a broadband forward scattering model based on Kirchhoff's diffraction formula to determine wave shapes. The surface reconstruction algorithm is self-starting in that source and receiver geometry and initial estimates of wave shape are determined from the same acoustic signals used in the inverse processing. A high speed camera provides ground-truth measurements of the surface wave field for comparison with the acoustically derived surface waves. Within Fresnel zone regions the statistical confidence of the inversely optimized surface profile exceeds that of the camera profile. Reconstructed surfaces are accurate to a resolution of about a quarter-wavelength of the acoustic pulse only within Fresnel zones associated with each source and receiver pair. Multiple isolated Fresnel zones from multiple receivers extend the spatial extent of accurate surface reconstruction while overlapping Fresnel zones increase confidence in the optimized profiles there.
Liu, Chun; Kroll, Andreas
2016-01-01
Multi-robot task allocation determines the task sequence and distribution for a group of robots in multi-robot systems; it is a constrained combinatorial optimization problem and becomes more complex in the case of cooperative tasks, because they introduce additional spatial and temporal constraints. To solve multi-robot task allocation problems with cooperative tasks efficiently, a subpopulation-based genetic algorithm, a crossover-free genetic algorithm employing mutation operators and elitism selection in each subpopulation, is developed in this paper. Moreover, the impact of mutation operators (swap, insertion, inversion, displacement, and their various combinations) is analyzed when solving several industrial plant inspection problems. The experimental results show that: (1) the proposed genetic algorithm can obtain better solutions than the tested binary tournament genetic algorithm with partially mapped crossover; (2) inversion mutation performs better than the other tested mutation operators when solving problems without cooperative tasks, and the swap-inversion combination performs better than the other tested mutation operators/combinations when solving problems with cooperative tasks. As it is difficult to produce all desired effects with a single mutation operator, using multiple mutation operators (including both inversion and swap) is suggested when solving similar combinatorial optimization problems.
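The tested mutation operators act on permutation-encoded task sequences. An illustrative sketch follows; the list encoding and the fixed positions i, j are assumptions for demonstration, not the paper's exact implementation:

```python
def swap(t, i, j):
    """Swap mutation: exchange the tasks at positions i and j."""
    t = list(t)
    t[i], t[j] = t[j], t[i]
    return t

def inversion(t, i, j):
    """Inversion mutation: reverse the subsequence from i to j inclusive."""
    return t[:i] + t[i:j + 1][::-1] + t[j + 1:]

def insertion(t, i, j):
    """Insertion mutation: remove the task at i and reinsert it at j."""
    t = list(t)
    task = t.pop(i)
    t.insert(j, task)
    return t

tour = [0, 1, 2, 3, 4, 5]
print(swap(tour, 1, 4))       # [0, 4, 2, 3, 1, 5]
print(inversion(tour, 1, 4))  # [0, 4, 3, 2, 1, 5]
print(insertion(tour, 1, 4))  # [0, 2, 3, 4, 1, 5]
```

Note the different locality of the operators: inversion reverses a whole segment (good for untangling route order), while swap disturbs only two positions, which helps explain why their combination works best once cooperative-task constraints couple distant positions.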
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marzouk, Youssef; Fast P.; Kraus, M.
2006-01-01
Terrorist attacks using an aerosolized pathogen preparation have gained credibility as a national security concern after the anthrax attacks of 2001. The ability to characterize such attacks, i.e., to estimate the number of people infected, the time of infection, and the average dose received, is important when planning a medical response. We address this question of characterization by formulating a Bayesian inverse problem predicated on a short time-series of diagnosed patients exhibiting symptoms. To be of relevance to response planning, we limit ourselves to 3-5 days of data. In tests performed with anthrax as the pathogen, we find that these data are usually sufficient, especially if the model of the outbreak used in the inverse problem is an accurate one. In some cases the scarcity of data may initially support outbreak characterizations at odds with the true one, but with sufficient data the correct inferences are recovered; in other words, the inverse problem posed and its solution methodology are consistent. We also explore the effect of model error, i.e., situations for which the model used in the inverse problem is only a partially accurate representation of the outbreak; here, the model predictions and the observations differ by more than random noise. We find that while there is a consistent discrepancy between the inferred and the true characterizations, they are also close enough to be of relevance when planning a response.
The novel high-performance 3-D MT inverse solver
NASA Astrophysics Data System (ADS)
Kruglyakov, Mikhail; Geraskin, Alexey; Kuvshinov, Alexey
2016-04-01
We present a novel, robust, scalable, and fast 3-D magnetotelluric (MT) inverse solver. The solver is written in a multi-language paradigm to make it as efficient, readable and maintainable as possible. Separation-of-concerns and single-responsibility principles run through the implementation of the solver. A modern scalable solver, extrEMe, based on a contracting integral equation approach, is used as the forward modelling engine. An iterative gradient-type (quasi-Newton) optimization scheme is invoked to search for the (regularized) inverse problem solution, and an adjoint source approach is used to efficiently calculate the gradient of the misfit. The inverse solver is able to deal with highly detailed and contrasting models, allows for working (separately or jointly) with any type of MT responses, and supports massive parallelization. Moreover, different parallelization strategies implemented in the code allow optimal usage of available computational resources for a given problem statement. To parameterize an inverse domain, the so-called mask parameterization is implemented, which means that one can merge any subset of forward modelling cells in order to account for the (usually) irregular distribution of observation sites. We report results of 3-D numerical experiments aimed at analysing the robustness, performance and scalability of the code. In particular, our computational experiments, carried out on platforms ranging from modern laptops to the HPC system Piz Daint (the 6th-ranked supercomputer in the world), demonstrate practically linear scalability of the code up to thousands of nodes.
Peel, Sarah A; Hussain, Tarique; Cecelja, Marina; Abbas, Abeera; Greil, Gerald F; Chowienczyk, Philip; Spector, Tim; Smith, Alberto; Waltham, Matthew; Botnar, Rene M
2011-11-01
To accelerate and optimize black blood properties of the quadruple inversion recovery (QIR) technique for imaging the abdominal aortic wall. QIR inversion delays were optimized for different heart rates in simulations and phantom studies by minimizing the steady state magnetization of blood for T(1) = 100-1400 ms. To accelerate and improve black blood properties of aortic vessel wall imaging, the QIR prepulse was combined with zoom imaging and (a) "traditional" and (b) "trailing" electrocardiogram (ECG) triggering. Ten volunteers were imaged pre- and post-contrast administration using a conventional ECG-triggered double inversion recovery (DIR) and the two QIR implementations in combination with a zoom-TSE readout. The QIR implemented with "trailing" ECG-triggering resulted in consistently good blood suppression as the second inversion delay was timed during maximum systolic flow in the aorta. The blood signal-to-noise ratio and vessel wall to blood contrast-to-noise ratio, vessel wall sharpness, and image quality scores showed a statistically significant improvement compared with the traditional QIR implementation with and without ECG-triggering. We demonstrate that aortic vessel wall imaging can be accelerated with zoom imaging and that "trailing" ECG-triggering improves black blood properties of the aorta which is subject to motion and variable blood flow during the cardiac cycle. Copyright © 2011 Wiley Periodicals, Inc.
Sorting signed permutations by inversions in O(nlogn) time.
Swenson, Krister M; Rajan, Vaibhav; Lin, Yu; Moret, Bernard M E
2010-03-01
The study of genomic inversions (or reversals) has been a mainstay of computational genomics for nearly 20 years. After the initial breakthrough of Hannenhalli and Pevzner, who gave the first polynomial-time algorithm for sorting signed permutations by inversions, improved algorithms have been designed, culminating with an optimal linear-time algorithm for computing the inversion distance and a subquadratic algorithm for providing a shortest sequence of inversions, also known as sorting by inversions. Remaining open was the question of whether sorting by inversions could be done in O(nlogn) time. In this article, we present a qualified answer to this question, by providing two new sorting algorithms, a simple and fast randomized algorithm and a deterministic refinement. The deterministic algorithm runs in time O(nlogn + kn), where k is a data-dependent parameter. We provide the results of extensive experiments showing that both the average and the standard deviation for k are small constants, independent of the size of the permutation. We conclude (but do not prove) that almost all signed permutations can be sorted by inversions in O(nlogn) time.
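The elementary operation being counted here, an inversion applied to a signed permutation, reverses a segment and flips the sign of every element in it. A minimal sketch of that operation (illustrative only; the Hannenhalli-Pevzner theory and the O(nlogn) data structures of the paper go far beyond this):

```python
def signed_inversion(perm, i, j):
    # Apply one inversion to the signed permutation `perm`:
    # reverse the segment perm[i..j] and negate each of its elements.
    return perm[:i] + [-x for x in reversed(perm[i:j + 1])] + perm[j + 1:]
```

Sorting by inversions asks for the shortest sequence of such operations turning a signed permutation into the identity (1, 2, ..., n); for example, a single inversion over the whole of [-2, -1] already yields [1, 2].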
The effect of different control point sampling sequences on convergence of VMAT inverse planning
NASA Astrophysics Data System (ADS)
Pardo Montero, Juan; Fenwick, John D.
2011-04-01
A key component of some volumetric-modulated arc therapy (VMAT) optimization algorithms is the progressive addition of control points to the optimization. This idea was introduced in Otto's seminal VMAT paper, in which a coarse sampling of control points was used at the beginning of the optimization and new control points were progressively added one at a time. A different form of the methodology is also present in the RapidArc optimizer, which adds new control points in groups called 'multiresolution levels', each doubling the number of control points in the optimization. This progressive sampling accelerates convergence, improving the results obtained, and has similarities with the ordered subset algorithm used to accelerate iterative image reconstruction. In this work we have used a VMAT optimizer developed in-house to study the performance of optimization algorithms which use different control point sampling sequences, most of which fall into three different classes: doubling sequences, which add new control points in groups such that the number of control points in the optimization is (roughly) doubled; Otto-like progressive sampling which adds one control point at a time, and equi-length sequences which contain several multiresolution levels each with the same number of control points. Results are presented in this study for two clinical geometries, prostate and head-and-neck treatments. A dependence of the quality of the final solution on the number of starting control points has been observed, in agreement with previous works. We have found that some sequences, especially E20 and E30 (equi-length sequences with 20 and 30 multiresolution levels, respectively), generate better results than a 5 multiresolution level RapidArc-like sequence. 
The final value of the cost function is reduced up to 20%, such reductions leading to small improvements in dosimetric parameters characterizing the treatments—slightly more homogeneous target doses and better sparing of the organs at risk.
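The sequence families compared above can be sketched as simple generators of control-point counts per multiresolution level (the function names are hypothetical, and the actual optimizer's schedules may differ in detail):

```python
def doubling_sequence(n_final, n_start):
    # Doubling-style sampling: each multiresolution level (roughly)
    # doubles the number of control points until n_final is reached.
    seq = [n_start]
    while seq[-1] < n_final:
        seq.append(min(2 * seq[-1], n_final))
    return seq

def equi_length_sequence(n_final, levels):
    # 'E20'/'E30'-style sampling: `levels` evenly spaced control-point
    # counts ending at n_final.
    return [round(n_final * (k + 1) / levels) for k in range(levels)]
```

For instance, doubling_sequence(64, 8) yields four levels, whereas equi_length_sequence(64, 20) adds control points in 20 equal increments, which is the kind of finer-grained schedule the study found to outperform a 5-level RapidArc-like sequence.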
A comprehensive formulation for volumetric modulated arc therapy planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nguyen, Dan; Lyu, Qihui; Ruan, Dan
2016-07-15
Purpose: Volumetric modulated arc therapy (VMAT) is a widely employed radiation therapy technique, showing comparable dosimetry to static beam intensity modulated radiation therapy (IMRT) with reduced monitor units and treatment time. However, current VMAT optimization employs various greedy heuristics for an empirical solution, which jeopardizes plan consistency and quality. The authors introduce a novel direct aperture optimization method for VMAT to overcome these limitations. Methods: The comprehensive VMAT (comVMAT) planning was formulated as an optimization problem with an L2-norm fidelity term to penalize the difference between the optimized dose and the prescribed dose, as well as an anisotropic total variation term to promote piecewise continuity in the fluence maps, preparing it for direct aperture optimization. A level set function was used to describe the aperture shapes, and the difference between aperture shapes at adjacent angles was penalized to control MLC motion range. A proximal-class optimization solver was adopted to solve the large scale optimization problem, and an alternating optimization strategy was implemented to solve the fluence intensity and aperture shapes simultaneously. Single arc comVMAT plans, utilizing 180 beams with 2° angular resolution, were generated for a glioblastoma multiforme case, a lung (LNG) case, and two head and neck cases, one with three PTVs (H&N{sub 3PTV}) and one with four PTVs (H&N{sub 4PTV}), to test the efficacy. The plans were compared against the clinical VMAT (clnVMAT) plans utilizing two overlapping coplanar arcs for treatment. Results: The optimization of the comVMAT plans converged within 600 iterations of the block minimization algorithm. The comVMAT plans consistently reduced the dose to all organs-at-risk (OARs) compared to the clnVMAT plans.
On average, comVMAT plans reduced the max and mean OAR dose by 6.59% and 7.45%, respectively, of the prescription dose. Reductions in max dose and mean dose were as high as 14.5 Gy in the LNG case and 15.3 Gy in the H&N{sub 3PTV} case. PTV coverages measured by D95, D98, and D99 were within 0.25% of the prescription dose. By comprehensively optimizing all beams, the comVMAT optimizer gained the freedom to allow some selected beams to deliver higher intensities, yielding a dose distribution that resembles a static beam IMRT plan with beam orientation optimization. Conclusions: The novel nongreedy VMAT approach simultaneously optimizes all beams in an arc and then directly generates deliverable apertures. The single arc VMAT approach thus fully utilizes the digital Linac's capability in dose rate and gantry rotation speed modulation. In practice, the new single-arc VMAT algorithm generates plans superior to existing VMAT algorithms utilizing two arcs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, D; Sunnybrook Research Institute, Sunnybrook Health Sciences Centre, Toronto, Ontario; Safigholi, H
2015-06-15
Purpose: To evaluate the impact on plan quality of using gold wires to differentially fill various channels inside a novel directional modulated brachytherapy (DMBT) tandem applicator for cervical cancer brachytherapy, compared with a conventional T&R applicator. Materials and Methods: The novel DMBT tandem applicator has a 5.4-mm diameter MR-compatible tungsten alloy rod enclosed in 0.3-mm thick plastic tubing that wraps around the tandem. To modulate the radiation intensity, 6 symmetric peripheral holes of 1.3-mm diameter are grooved along the tungsten alloy rod. These grooved holes are differentially filled with gold wires to generate various degrees of directional beams. For example, three different fill patterns were Monte Carlo simulated: 1) all holes void, 2) all holes filled except the one containing the 192-Ir source, and 3) the two holes adjacent to the 192-Ir source filled. The resulting 3D dose distributions were imported into an in-house-coded inverse optimization planning system to generate HDR brachytherapy clinical plans for 19 patient cases. All plans were normalized to the same D90 as the clinical plans, and D2cc doses to the OARs were evaluated. Prescriptions ranged between 15 and 17.5 Gy. Results: In general, the plans in case 1) resulted in the highest D2cc doses for the OARs, with 11.65±2.30 Gy, 7.47±3.05 Gy, and 9.84±2.48 Gy for bladder, rectum, and sigmoid, respectively, although the differences were small. For case 2), the D2cc doses were 11.61±2.29 Gy, 7.41±3.07 Gy, and 9.75±2.45 Gy, respectively, and for case 3) they were 11.60±2.28 Gy, 7.41±3.05 Gy, and 9.74±2.45 Gy. Differences between cases 1) and 2) were small, with an average D2cc difference of <0.64%; differences between cases 1) and 3) were even smaller, with an average D2cc difference of <0.1%. Conclusions: There is minimal clinical benefit from differentially filling the grooved holes in the novel DMBT tandem applicator for image guided cervical cancer brachytherapy.
NASA Astrophysics Data System (ADS)
Kurz, Christopher; Landry, Guillaume; Resch, Andreas F.; Dedes, George; Kamp, Florian; Ganswindt, Ute; Belka, Claus; Raaymakers, Bas W.; Parodi, Katia
2017-11-01
Combining magnetic-resonance imaging (MRI) and proton therapy (PT) using pencil-beam scanning (PBS) may improve image-guided radiotherapy. We aimed at assessing the impact of a magnetic field on PBS-PT plan quality and robustness. Specifically, the robustness against anatomical changes and positioning errors in an MRI-guided scenario with a 30 cm radius 1.5 T magnetic field was studied for prostate PT. Five prostate cancer patients with three consecutive CT images (CT1-3) were considered. Single-field uniform dose PBS-PT plans were generated on the segmented CT1 with Monte-Carlo-based treatment planning software for inverse optimization. Plans were optimized at 90° gantry angle without B-field (no B), with ±1.5 T B-field (B and minus B), as well as at 81° gantry angle and +1.5 T (B G81). Plans were re-calculated on aligned CT2 and CT3 to study the impact of anatomical changes. Dose distributions were compared in terms of changes in DVH parameters, proton range and gamma-index pass-rates. To assess the impact of positioning errors, DVH parameters were compared for ±5 mm CT1 patient shifts in anterior-posterior (AP) and left-right (LR) direction. Proton beam deflection considerably reduced robustness against inter-fractional changes for the B scenario. Range agreement, gamma-index pass-rates and PTV V95% were significantly lower compared to no B. Improved robustness was obtained for minus B and B G81, the latter showing only minor differences to no B. The magnetic field introduced slight dosimetric changes under LR shifts. The impact of AP shifts was considerably larger, and equivalent for scenarios with and without B-field. Results suggest that robustness equivalent to PT without magnetic field can be achieved by adaptation of the treatment parameters, such as B-field orientation (minus B) with respect to the patient and/or gantry angle (B G81). 
MRI-guided PT for prostate cancer might thus be implemented without compromising robustness compared to state-of-the-art CT-guided PT.
Li, Jianjun; Zhang, Rubo; Yang, Yu
2017-01-01
This paper studies a distributed task planning model for multi-autonomous underwater vehicles (MAUV). A scroll time domain quantum artificial bee colony (STDQABC) optimization algorithm is proposed to solve the multi-AUV optimal task planning problem. In an uncertain marine environment, the rolling time domain control technique is used to realize numerical optimization over a narrowed time range. Rolling time domain control is one of the better task planning techniques, as it can greatly reduce the computational workload and realize the tradeoff among AUV dynamics, environment, and cost. Finally, a simulation experiment was performed to evaluate the distributed task planning performance of the scroll time domain quantum bee colony optimization algorithm. The simulation results demonstrate that the STDQABC algorithm converges faster than the QABC and ABC algorithms in terms of both iterations and running time. The STDQABC algorithm can effectively improve MAUV distributed task planning performance, complete the task goal, and obtain an approximately optimal solution. PMID:29186166
Particle Swarm Optimization for inverse modeling of solute transport in fractured gneiss aquifer
NASA Astrophysics Data System (ADS)
Abdelaziz, Ramadan; Zambrano-Bigiarini, Mauricio
2014-08-01
Particle Swarm Optimization (PSO) has received considerable attention as a global optimization technique from scientists of different disciplines around the world. In this article, we illustrate how to use PSO for inverse modeling of a coupled flow and transport groundwater model (MODFLOW2005-MT3DMS) in a fractured gneiss aquifer. In particular, the hydroPSO R package is used as the optimization engine, because it has been specifically designed to calibrate environmental, hydrological and hydrogeological models. In addition, hydroPSO implements the latest Standard Particle Swarm Optimization algorithm (SPSO-2011), with an adaptive random topology and rotational invariance constituting the main advancements over previous PSO versions. A tracer test conducted in the experimental field at TU Bergakademie Freiberg (Germany) is used as the case study. A double-porosity approach is used to simulate the solute transport in the fractured gneiss aquifer. Tracer concentrations obtained with hydroPSO were in good agreement with the corresponding observations, as measured by a high value of the coefficient of determination and a low sum of squared residuals. Several graphical outputs automatically generated by hydroPSO provided useful insights to assess the quality of the calibration results. It was found that hydroPSO required a small number of model runs to reach the region of the global optimum, and it proved to be both an effective and efficient optimization technique to calibrate the movement of solute transport over time in a fractured aquifer. In addition, the parallel feature of hydroPSO allowed the total computation time of the inverse modeling process to be reduced to as little as an eighth of the time required without it. This work provides a first attempt to demonstrate the capability and versatility of hydroPSO to work as an optimizer of a coupled flow and transport model for contaminant migration.
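For readers unfamiliar with the method, a bare-bones global-best PSO can be sketched in a few lines of Python (a toy stand-in only; hydroPSO implements the far more elaborate SPSO-2011 with adaptive random topology):

```python
import random

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    # Minimal global-best particle swarm minimizer over box bounds.
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # personal bests
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + cognitive pull + social pull
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In inverse modeling the objective would wrap a forward model run (here, MODFLOW/MT3DMS) and return a misfit such as the sum of squared residuals between simulated and observed tracer concentrations.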
van der Kruk, E; Schwab, A L; van der Helm, F C T; Veeger, H E J
2018-03-01
In gait studies, body pose reconstruction (BPR) techniques have been widely explored, but no protocols have previously been developed for speed skating, and the peculiarities of the skating posture and technique do not automatically allow the results of those explorations to be transferred to kinematic skating data. The aim of this paper is to determine the best procedure for body pose reconstruction and inverse dynamics of speed skating, and to what extent this choice influences the estimation of joint power. The results show that an eight body segment model together with a global optimization method, with a revolute joint at the knee and at the lumbosacral joint while keeping the other joints spherical, would be the most realistic model to use for the inverse kinematics in speed skating. To determine joint power, this method should be combined with a least-squares error method for the inverse dynamics. Reporting on the BPR technique and the inverse dynamics method is crucial to enable comparison between studies. Our data showed an underestimation of up to 74% in mean joint power when no optimization procedure was applied for BPR, and an underestimation of up to 31% in mean joint power when a bottom-up inverse dynamics method was chosen instead of a least-squares error approach. Although these results are aimed at speed skating, reporting on the BPR procedure and the inverse dynamics method, together with setting a gold standard, should be common practice in all human movement research to allow comparison between studies. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Yarmand, Hamed; Winey, Brian; Craft, David
2013-09-01
Stereotactic body radiation therapy (SBRT) is characterized by delivering a high amount of dose in a short period of time. In SBRT the dose is delivered using open fields (e.g., beam’s-eye-view) known as ‘apertures’. Mathematical methods can be used for optimizing treatment planning for delivery of sufficient dose to the cancerous cells while keeping the dose to surrounding organs at risk (OARs) minimal. Two important elements of a treatment plan are quality and delivery time. Quality of a plan is measured based on the target coverage and dose to OARs. Delivery time heavily depends on the number of beams used in the plan as the setup times for different beam directions constitute a large portion of the delivery time. Therefore the ideal plan, in which all potential beams can be used, will be associated with a long impractical delivery time. We use the dose to OARs in the ideal plan to find the plan with the minimum number of beams which is guaranteed to be epsilon-optimal (i.e., a predetermined maximum deviation from the ideal plan is guaranteed). Since the treatment plan optimization is inherently a multi-criteria-optimization problem, the planner can navigate the ideal dose distribution Pareto surface and select a plan of desired target coverage versus OARs sparing, and then use the proposed technique to reduce the number of beams while guaranteeing epsilon-optimality. We use mixed integer programming (MIP) for optimization. To reduce the computation time for the resultant MIP, we use two heuristics: a beam elimination scheme and a family of heuristic cuts, known as ‘neighbor cuts’, based on the concept of ‘adjacent beams’. We show the effectiveness of the proposed technique on two clinical cases, a liver and a lung case. Based on our technique we propose an algorithm for fast generation of epsilon-optimal plans.
Action understanding as inverse planning.
Baker, Chris L; Saxe, Rebecca; Tenenbaum, Joshua B
2009-12-01
Humans are adept at inferring the mental states underlying other agents' actions, such as goals, beliefs, desires, emotions and other thoughts. We propose a computational framework based on Bayesian inverse planning for modeling human action understanding. The framework represents an intuitive theory of intentional agents' behavior based on the principle of rationality: the expectation that agents will plan approximately rationally to achieve their goals, given their beliefs about the world. The mental states that caused an agent's behavior are inferred by inverting this model of rational planning using Bayesian inference, integrating the likelihood of the observed actions with the prior over mental states. This approach formalizes in precise probabilistic terms the essence of previous qualitative approaches to action understanding based on an "intentional stance" [Dennett, D. C. (1987). The intentional stance. Cambridge, MA: MIT Press] or a "teleological stance" [Gergely, G., Nádasdy, Z., Csibra, G., & Biró, S. (1995). Taking the intentional stance at 12 months of age. Cognition, 56, 165-193]. In three psychophysical experiments using animated stimuli of agents moving in simple mazes, we assess how well different inverse planning models based on different goal priors can predict human goal inferences. The results provide quantitative evidence for an approximately rational inference mechanism in human goal inference within our simplified stimulus paradigm, and for the flexible nature of goal representations that human observers can adopt. We discuss the implications of our experimental results for human action understanding in real-world contexts, and suggest how our framework might be extended to capture other kinds of mental state inferences, such as inferences about beliefs, or inferring whether an entity is an intentional agent.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Donovan, Ellen M., E-mail: ellen.donovan@icr.ac.u; Ciurlionis, Laura; Fairfoul, Jamie
Purpose: To establish planning solutions for a concomitant three-level radiation dose distribution to the breast using linear accelerator- or tomotherapy-based intensity-modulated radiotherapy (IMRT), for the U.K. Intensity Modulated and Partial Organ (IMPORT) High trial. Methods and Materials: Computed tomography data sets for 9 patients undergoing breast conservation surgery with implanted tumor bed gold markers were used to prepare three-level dose distributions encompassing the whole breast (36 Gy), partial breast (40 Gy), and tumor bed boost (48 or 53 Gy) treated concomitantly in 15 fractions within 3 weeks. Forward and inverse planned IMRT and tomotherapy were investigated as solutions. A standard electron field was compared with a photon field arrangement encompassing the tumor bed boost volume. The out-of-field doses were measured for all methods. Results: Dose-volume constraints of volume >90% receiving 32.4 Gy and volume >95% receiving 50.4 Gy for the whole breast and tumor bed were achieved. The constraint of volume >90% receiving 36 Gy for the partial breast was fulfilled in the inverse IMRT and tomotherapy plans and in 7 of 9 cases of a forward planned IMRT distribution. An electron boost to the tumor bed was inadequate in 8 of 9 cases. The IMRT methods delivered a greater whole body dose than the standard breast tangents. The contralateral lung volume receiving >2.5 Gy was increased in the inverse IMRT and tomotherapy plans, although it did not exceed the constraint. Conclusion: We have demonstrated a set of widely applicable solutions that fulfilled the stringent clinical trial requirements for the delivery of a concomitant three-level dose distribution to the breast.
Poster - 34: Clinical Implementation of Prone Breast Treatment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Runqing; Fleming, Katrina; Kobeleva, Sofya
2016-08-15
Purpose: Prone breast treatment is used to reduce acute and late toxicities for patients with large or pendulous breasts. This study developed and implemented the clinical workflow of prone breast radiotherapy treatment. Methods: The Varian kVue Access360™ Prone Breast Couchtop was used as the prone breast board. Treatment planning (TP) was performed in the Eclipse TP system. TP comparisons between supine deep inspiration breath hold (DIBH) and prone breast plans, and between prone forward field-in-field (FinF) planning and inverse IMRT planning, were performed and discussed. For the daily setup, breast coverage was assessed in the room using the light field, and MV imaging was used on day 1 and weekly. Results: The first ten patients were CT scanned and planned both supine and prone. Coverage was excellent for both the supine DIBH plan and the prone breast plan. The plan in the prone position demonstrated improvements in lung sparing compared to the DIBH plan. Both the forward FinF plan and the inverse IMRT plan achieved acceptable coverage of the breast, and heart dose was comparable. Considering daily setup variations and MLC leakage, the forward FinF plan was recommended for routine clinical use. The procedure was tested in phantom, and patients have been treated clinically. Conclusions: Prone breast irradiation has been advocated for women with large pendulous breasts in order to decrease acute and late toxicities. The workflow for prone breast radiation therapy has been developed and the technique is ready to treat patients.
NASA Astrophysics Data System (ADS)
Manzanares-Filho, N.; Albuquerque, R. B. F.; Sousa, B. S.; Santos, L. G. C.
2018-06-01
This article presents a comparative study of some versions of the controlled random search algorithm (CRSA) in global optimization problems. The basic CRSA, originally proposed by Price in 1977 and improved by Ali et al. in 1997, is taken as a starting point. Then, some new modifications are proposed to improve the efficiency and reliability of this global optimization technique. The performance of the algorithms is assessed using traditional benchmark test problems commonly invoked in the literature. This comparative study points out the key features of the modified algorithm. Finally, a comparison is also made in a practical engineering application, namely the inverse aerofoil shape design.
Optimization method for an evolutional type inverse heat conduction problem
NASA Astrophysics Data System (ADS)
Deng, Zui-Cha; Yu, Jian-Ning; Yang, Liu
2008-01-01
This paper deals with the determination of a pair (q, u) in the heat conduction equation u_t − u_xx + q(x,t)u = 0, with initial and boundary conditions u(x,0) = u_0(x), u_x(0,t) = u_x(1,t) = 0, from the overspecified data u(x,t) = g(x,t). By a time semi-discrete scheme, the problem is transformed into a sequence of inverse problems in which the unknown coefficients are purely space dependent. Based on the optimal control framework, the existence, uniqueness and stability of the solution (q, u) are proved. A necessary condition, which is a coupled system of a parabolic equation and a parabolic variational inequality, is deduced.
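The time semi-discretization could, for instance, take a backward-Euler form (the abstract does not specify the scheme, so this is an assumption):

```latex
% Semi-discretization in time with step \tau = T/N: at each level
% t_n = n\tau, the evolution problem reduces to a stationary inverse
% problem for the purely space-dependent coefficient q_n(x) \approx q(x, t_n):
\frac{u_n(x) - u_{n-1}(x)}{\tau} - (u_n)_{xx}(x) + q_n(x)\, u_n(x) = 0,
\qquad (u_n)_x(0) = (u_n)_x(1) = 0, \qquad u_0(x) \text{ given},
```

with the overspecified data entering at each level as u_n(x) = g(x, t_n), which is what makes each stationary coefficient q_n identifiable.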
Inverse modeling and animation of growing single-stemmed trees at interactive rates
S. Rudnick; L. Linsen; E.G. McPherson
2007-01-01
For city planning purposes, animations of growing trees of several species can be used to deduce which species may best fit a particular environment. The models used for the animation must conform to real measured data. We present an approach for inverse modeling to fit global growth parameters. The model comprises local production rules, which are iteratively and...
Coded excitation with spectrum inversion (CEXSI) for ultrasound array imaging.
Wang, Yao; Metzger, Kurt; Stephens, Douglas N; Williams, Gregory; Brownlie, Scott; O'Donnell, Matthew
2003-07-01
In this paper, a scheme called coded excitation with spectrum inversion (CEXSI) is presented. An established optimal binary code, whose spectrum has no nulls and possesses the least variation, is encoded as a burst for transmission. Using this optimal code, the decoding filter can be derived directly from its inverse spectrum. Various transmission techniques can be used to improve energy coupling within the system pass-band. We demonstrate its potential to achieve excellent decoding with very low (< -80 dB) side-lobes. For a 2.6 µs code and an array element with a center frequency of 10 MHz and fractional bandwidth of 38%, range side-lobes of about -40 dB have been achieved experimentally with little compromise in range resolution. The signal-to-noise ratio (SNR) improvement has also been characterized at about 14 dB. Along with simulations and experimental data, we present a formulation of the scheme, according to which CEXSI can be extended to improve SNR in sparse array imaging in general.
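The core decoding step, a filter whose spectrum is the reciprocal of the code's spectrum, can be sketched with NumPy (the code used below is an arbitrary null-free binary sequence chosen for illustration, not the optimal code from the paper):

```python
import numpy as np

def decode(received, code):
    # CEXSI-style decoding: the decoding filter's spectrum is the
    # reciprocal of the code's spectrum, so deconvolution is exact
    # provided the code spectrum has no nulls (checked explicitly).
    n = len(received)
    C = np.fft.fft(code, n)
    assert np.min(np.abs(C)) > 1e-12, "code spectrum must have no nulls"
    return np.fft.ifft(np.fft.fft(received) / C)
```

Convolving a target reflectivity with the transmitted code and then applying this filter recovers the reflectivity; in practice, noise amplification at frequencies where |C| is small is what motivates a code with the least spectral variation.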
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Eva Sau Fan; Department of Health Technology and Informatics, The Hong Kong Polytechnic University; Wu, Vincent Wing Cheung
Long planning time in volumetric-modulated arc stereotactic radiotherapy (VMA-SRT) cases can limit its clinical efficiency and use. A vector model could retrieve previously successful radiotherapy cases that share various common anatomic features with the current case. The present study aimed to develop a vector model that could reduce planning time by applying the optimization parameters from those retrieved reference cases. Thirty-six VMA-SRT cases of brain metastasis (gender: male [n = 23], female [n = 13]; age range: 32 to 81 years old) were collected and used as a reference database. Another 10 VMA-SRT cases were planned with both conventional optimization and vector-model-supported optimization, following the oncologists' clinical dose prescriptions. Planning time and plan quality measures were compared using the 2-sided paired Wilcoxon signed rank test with a significance level of 0.05, with a positive false discovery rate (pFDR) of less than 0.05. With vector-model-supported optimization, there was a significant reduction in the median planning time, a 40% reduction from 3.7 to 2.2 hours (p = 0.002, pFDR = 0.032), and in the number of iterations, a 30% reduction from 8.5 to 6.0 (p = 0.006, pFDR = 0.047). The quality of plans from both approaches was comparable. From these preliminary results, vector-model-supported optimization can expedite the optimization of VMA-SRT for brain metastasis while maintaining plan quality.
Ko, Young Eun; Cho, Byungchul; Kim, Su Ssan; Song, Si Yeol; Choi, Eun Kyung; Ahn, Seung Do; Yi, Byongyong
2016-01-01
Purpose To develop a simplified volumetric modulated arc therapy (VMAT) technique for more accurate dose delivery in thoracic stereotactic body radiation therapy (SBRT). Methods and Materials For each of the 22 lung SBRT cases treated with respiratory-gated VMAT, a dose rate modulated arc therapy (DrMAT) plan was retrospectively generated. A dynamic conformal arc therapy plan with 33 adjoining coplanar arcs was designed and their beam weights were optimized by an inverse planning process. All sub-arc beams were converted into a series of control points with varying MLC segment and dose rates and merged into an arc beam for a DrMAT plan. The plan quality of original VMAT and DrMAT was compared in terms of target coverage, compactness of dose distribution, and dose sparing of organs at risk. To assess the delivery accuracy, the VMAT and DrMAT plans were delivered to a motion phantom programmed with the corresponding patients’ respiratory signal; results were compared using film dosimetry with gamma analysis. Results The plan quality of DrMAT was equivalent to that of VMAT in terms of target coverage, dose compactness, and dose sparing for the normal lung. In dose sparing for other critical organs, DrMAT was less effective than VMAT for the spinal cord, heart, and esophagus while being well within the limits specified by the Radiation Therapy Oncology Group. Delivery accuracy of DrMAT to a moving target was similar to that of VMAT using a gamma criterion of 2%/2mm but was significantly better using a 2%/1mm criterion, implying the superiority of DrMAT over VMAT in SBRT for thoracic/abdominal tumors with respiratory movement. Conclusion We developed a DrMAT technique for SBRT that produces plans of a quality similar to that achieved with VMAT but with better delivery accuracy. This technique is well-suited for small tumors with motion uncertainty. PMID:27333199
Weighting by Inverse Variance or by Sample Size in Random-Effects Meta-Analysis
ERIC Educational Resources Information Center
Marin-Martinez, Fulgencio; Sanchez-Meca, Julio
2010-01-01
Most of the statistical procedures in meta-analysis are based on the estimation of average effect sizes from a set of primary studies. The optimal weight for averaging a set of independent effect sizes is the inverse variance of each effect size, but in practice these weights have to be estimated, being affected by sampling error. When assuming a…
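The two weighting schemes contrasted above can be compared directly. The effect sizes, variances, and sample sizes below are made up for illustration; only the weighting formulas are standard.

```python
import numpy as np

# Inverse-variance vs. sample-size weighting of study effect sizes.
d = np.array([0.30, 0.45, 0.10, 0.60])        # study effect sizes (invented)
v = np.array([0.010, 0.050, 0.020, 0.080])    # their sampling variances (invented)
n = np.array([200, 50, 100, 25])              # study sample sizes (invented)

w_iv = 1.0 / v                                 # inverse-variance weights
w_n = n.astype(float)                          # sample-size weights

mean_iv = np.sum(w_iv * d) / np.sum(w_iv)
mean_n = np.sum(w_n * d) / np.sum(w_n)
se_iv = np.sqrt(1.0 / np.sum(w_iv))            # standard error under IV weighting

# (A random-effects version would add an estimate of tau**2 to each v
#  before forming the weights.)
print(mean_iv, mean_n, se_iv)
```

The two pooled means differ whenever the estimated variances are not exactly proportional to 1/n, which is the situation the article analyzes: estimated inverse-variance weights carry sampling error of their own.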
Cooperative optimization of reconfigurable machine tool configurations and production process plan
NASA Astrophysics Data System (ADS)
Xie, Nan; Li, Aiping; Xue, Wei
2012-09-01
The production process plan design and configurations of reconfigurable machine tools (RMT) interact with each other. Reasonable process plans with suitable RMT configurations help to improve product quality and reduce production cost. Therefore, a cooperative strategy is needed to solve both issues concurrently. In this paper, a cooperative optimization model for RMT configurations and the production process plan is presented. Its objectives take into account the impacts of both process and configuration. Moreover, a novel genetic algorithm is developed to provide optimal or near-optimal solutions: firstly, its chromosome is redesigned to comprise three parts — operations, process plan, and RMT configurations; secondly, new selection, crossover and mutation operators are developed to handle the process constraints from the operation processes (OP) graph, without which these operators could generate illegal solutions violating the limits; eventually the optimal RMT configurations under the optimal process plan design can be obtained. Finally, the method is applied to a manufacturing line composed of three RMTs. The case shows that the optimal process plan and RMT configurations are obtained concurrently, the production cost decreases by 6.28%, and nonmonetary performance increases by 22%. The proposed method can determine both RMT configurations and the production process, improving production capacity, functionality, and equipment utilization for RMT.
Nonlinear inversion of potential-field data using a hybrid-encoding genetic algorithm
Chen, C.; Xia, J.; Liu, J.; Feng, G.
2006-01-01
Using a genetic algorithm to solve an inverse problem of complex nonlinear geophysical equations is advantageous because it does not require computing gradients of models or "good" initial models. The multi-point search of a genetic algorithm makes it easier to find the globally optimal solution while avoiding falling into a local extremum. As is the case in other optimization approaches, the search efficiency of a genetic algorithm is vital in finding desired solutions successfully in a multi-dimensional model space. A binary-encoding genetic algorithm is hardly ever used to resolve an optimization problem such as a simple geophysical inversion with only three unknowns. The encoding mechanism, genetic operators, and population size of the genetic algorithm greatly affect search processes in the evolution. It is clear that improved operators and a proper population size promote convergence. Nevertheless, not all genetic operations perform perfectly while searching under either a uniform binary or a decimal encoding system. With the binary encoding mechanism, the crossover scheme may produce more new individuals than with the decimal encoding. On the other hand, the mutation scheme in a decimal encoding system will create new genes larger in scope than those in the binary encoding. This paper discusses approaches to exploiting the search potential of genetic operations in the two encoding systems and presents an approach with a hybrid-encoding mechanism, multi-point crossover, and dynamic population size for geophysical inversion. We present a method based on the routine in which the mutation operation is conducted in the decimal code and the multi-point crossover operation in the binary code. The mixed-encoding algorithm is called the hybrid-encoding genetic algorithm (HEGA). HEGA provides better genes with a higher probability by a mutation operator and improves genetic algorithms in resolving complicated geophysical inverse problems.
Another significant result is that the final solution is determined by the average model derived from multiple trials instead of one computation, due to the randomness in a genetic algorithm procedure. These advantages were demonstrated by synthetic and real-world examples of inversion of potential-field data.
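The hybrid-encoding idea can be sketched compactly: crossover operates on a binary encoding of the parameters, while mutation perturbs the decimal (real-valued) parameters directly, and the final model averages several independent trials. The 3-unknown test problem (fitting a quadratic) is invented for illustration and is far simpler than a potential-field inversion.

```python
import numpy as np

# HEGA-style sketch: binary multi-point crossover + decimal mutation.
rng = np.random.default_rng(0)

LO = np.array([-5.0, -5.0, -5.0])              # assumed parameter bounds
HI = np.array([5.0, 5.0, 5.0])
TRUE = np.array([2.0, -1.0, 0.5])              # invented "true" model
x = np.linspace(-1.0, 1.0, 21)
data = TRUE[0] * x**2 + TRUE[1] * x + TRUE[2]

def misfit(p):
    return np.sum((p[0] * x**2 + p[1] * x + p[2] - data) ** 2)

BITS = 16

def encode(p):                                  # decimal -> 16 bits per parameter
    g = np.round((p - LO) / (HI - LO) * (2**BITS - 1)).astype(np.int64)
    return np.array([(v >> b) & 1 for v in g for b in range(BITS)])

def decode(bits):                               # 48 bits -> decimal parameters
    p = np.empty(3)
    for i in range(3):
        v = sum(int(bits[i * BITS + b]) << b for b in range(BITS))
        p[i] = LO[i] + v / (2**BITS - 1) * (HI[i] - LO[i])
    return p

def run(pop_size=40, n_gen=120):
    pop = rng.uniform(LO, HI, size=(pop_size, 3))
    for _ in range(n_gen):
        order = np.argsort([misfit(p) for p in pop])
        parents = pop[order][: pop_size // 2]            # truncation selection
        children = []
        for _ in range(pop_size - parents.shape[0]):
            a, b = parents[rng.integers(parents.shape[0], size=2)]
            ca, cb = encode(a), encode(b)
            c1, c2 = np.sort(rng.integers(1, ca.size, size=2))  # two crossover points
            p = decode(np.concatenate([ca[:c1], cb[c1:c2], ca[c2:]]))
            if rng.random() < 0.3:                        # decimal (real-valued) mutation
                j = rng.integers(3)
                p[j] = np.clip(p[j] + rng.normal(0.0, 0.2), LO[j], HI[j])
            children.append(p)
        pop = np.vstack([parents, children])
    return pop[int(np.argmin([misfit(p) for p in pop]))]

# Average the model over independent trials, as the abstract recommends.
best = np.mean([run() for _ in range(3)], axis=0)
print(best)   # close to [2.0, -1.0, 0.5]
```

The crossover mixes bit patterns (exploration), the decimal mutation provides smooth local refinement, and averaging over trials damps the run-to-run randomness; the dynamic population size of the actual HEGA is omitted here.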
NASA Astrophysics Data System (ADS)
Chen, Zhiming; Feng, Yuncheng
1988-08-01
This paper describes an algorithmic structure for combining simulation and optimization techniques in both theory and practice. Response surface methodology is used to optimize the decision variables in the simulation environment. Simulation-optimization software has been developed and successfully implemented, and its application to an aggregate production planning simulation-optimization model is reported. The model's objective is to minimize production cost and to generate an optimal production plan and inventory control strategy for an aircraft factory.
Zhang, Yichuan; Wang, Jiangping
2015-07-01
Rivers serve as a highly valued component of ecosystems and urban infrastructure. River planning should follow the basic principles of maintaining or reconstructing the natural landscape and ecological functions of rivers. Optimization of the planning scheme is a prerequisite for successful construction of urban rivers, so relevant studies on scheme optimization for natural ecology planning of rivers are crucial. In the present study, four planning schemes for the Zhaodingpal River in Xinxiang City, Henan Province were included as the objects for optimization. Fourteen factors that influence the natural ecology planning of urban rivers were selected from five aspects so as to establish the ANP model. The data processing was done using the Super Decisions software. The results showed that the importance degree of scheme 3 was the highest. A scientific, reasonable and accurate evaluation of schemes for natural ecology planning of urban rivers could be made by the ANP method. This method could be used to provide references for sustainable development and construction of urban rivers. The ANP method is also suitable for optimization of schemes for urban green space planning and design.
A three-dimensional inverse finite element analysis of the heel pad.
Chokhandre, Snehal; Halloran, Jason P; van den Bogert, Antonie J; Erdemir, Ahmet
2012-03-01
Quantification of plantar tissue behavior of the heel pad is essential in developing computational models for predictive analysis of preventive treatment options such as footwear for patients with diabetes. Simulation-based studies in the past have generally adopted heel pad properties from the literature, in turn using heel-specific geometry with material properties of a different heel. In exceptional cases, patient-specific material characterization was performed with simplified two-dimensional models, without further evaluation of a heel-specific response under different loading conditions. The aim of this study was to conduct an inverse finite element analysis of the heel in order to calculate heel-specific material properties in situ. Multidimensional experimental data available from a previous cadaver study by Erdemir et al. ("An Elaborate Data Set Characterizing the Mechanical Response of the Foot," ASME J. Biomech. Eng., 131(9), pp. 094502) was used for model development, optimization, and evaluation of material properties. A specimen-specific three-dimensional finite element representation was developed. Heel pad material properties were determined using inverse finite element analysis by fitting the model behavior to the experimental data. Compression dominant loading, applied using a spherical indenter, was used for optimization of the material properties. The optimized material properties were evaluated through simulations representative of a combined loading scenario (compression and anterior-posterior shear) with a spherical indenter and also of a compression dominant loading applied using an elevated platform. Optimized heel pad material coefficients were 0.001084 MPa (μ), 9.780 (α) (with an effective Poisson's ratio (ν) of 0.475), for a first-order nearly incompressible Ogden material model.
The model predicted structural response of the heel pad was in good agreement for both the optimization (<1.05% maximum tool force, 0.9% maximum tool displacement) and validation cases (6.5% maximum tool force, 15% maximum tool displacement). The inverse analysis successfully predicted the material properties for the given specimen-specific heel pad using the experimental data for the specimen. The modeling framework and results can be used for accurate predictions of the three-dimensional interaction of the heel pad with its surroundings.
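The inverse-identification step can be illustrated without the finite element model: for a first-order incompressible Ogden solid in uniaxial loading, the Cauchy stress is σ(λ) = (2μ/α)(λ^α − λ^(−α/2)). Below, the paper's reported coefficients generate a synthetic response, and a grid search (standing in for the FE-based optimization) recovers them; the loading protocol is invented.

```python
import numpy as np

# Recover (mu, alpha) of a first-order incompressible Ogden model from a
# synthetic uniaxial compression response.
MU, ALPHA = 0.001084, 9.780             # MPa, dimensionless (paper's values)
lam = np.linspace(0.80, 0.99, 20)       # compressive stretches (invented protocol)

def stress(lam, mu, alpha):
    return (2.0 * mu / alpha) * (lam**alpha - lam**(-alpha / 2.0))

data = stress(lam, MU, ALPHA)

# Grid search over the two coefficients, minimizing the sum of squared errors.
mus = np.linspace(0.0005, 0.002, 151)
alphas = np.linspace(5.0, 15.0, 201)
sse = np.array([[np.sum((stress(lam, m, a) - data) ** 2) for a in alphas] for m in mus])
i, j = np.unravel_index(np.argmin(sse), sse.shape)
print(mus[i], alphas[j])                # close to 0.001084, 9.780
```

In the actual study the "forward model" inside this loop is a full 3D finite element solve against indenter force-displacement data, not a closed-form stress; the sketch shows only the structure of the inverse problem.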
Numerical convergence and validation of the DIMP inverse particle transport model
Nelson, Noel; Azmy, Yousry
2017-09-01
The data integration with modeled predictions (DIMP) model is a promising inverse radiation transport method for solving the special nuclear material (SNM) holdup problem. Unlike previous methods, DIMP is a completely passive nondestructive assay technique that requires no initial assumptions regarding the source distribution or active measurement time. DIMP predicts the most probable source location and distribution through Bayesian inference and quasi-Newtonian optimization of predicted detector responses (using the adjoint transport solution) with measured responses. DIMP performs well with forward hemispherical collimation and unshielded measurements, but several considerations are required when using narrow-view collimated detectors. DIMP converged well to the correct source distribution as the number of synthetic responses increased. DIMP also performed well for the first experimental validation exercise after applying a collimation factor, and sufficiently reducing the source search volume's extent to prevent the optimizer from getting stuck in local minima. DIMP's simple point detector response function (DRF) is being improved to address coplanar false positive/negative responses, and an angular DRF is being considered for integration with the next version of DIMP to account for highly collimated responses. Overall, DIMP shows promise for solving the SNM holdup inverse problem, especially once an improved optimization algorithm is implemented.
Jahng, Geon-Ho; Jin, Wook; Yang, Dal Mo; Ryu, Kyung Nam
2011-05-01
We wanted to optimize a double inversion recovery (DIR) sequence to image joint effusion regions of the knee, especially intracapsular or intrasynovial imaging in the suprapatellar bursa and patellofemoral joint space. Computer simulations were performed to determine the optimum inversion times (TI) for suppressing both fat and water signals, and a DIR sequence was optimized based on the simulations for distinguishing synovitis from fluid. In vivo studies were also performed on individuals who showed joint effusion on routine knee MR images to demonstrate the feasibility of using the DIR sequence with a 3T whole-body MR scanner. To compare intracapsular or intrasynovial signals on the DIR images, intermediate density-weighted images and/or post-enhanced T1-weighted images were acquired. The timings to enhance the synovial contrast from the fluid components were TI1 = 2830 ms and TI2 = 254 ms for suppressing the water and fat signals, respectively. Improved contrast for the intrasynovial area in the knees was observed with the DIR turbo spin-echo pulse sequence compared to the intermediate density-weighted sequence. Imaging contrast obtained noninvasively with the DIR sequence was similar to that of the post-enhanced T1-weighted sequence. The DIR sequence may be useful for delineating synovium without using contrast materials.
[Analysis of visible extinction spectrum of particle system and selection of optimal wavelength].
Sun, Xiao-gang; Tang, Hong; Yuan, Gui-bin
2008-09-01
In the total light scattering particle sizing technique, the extinction spectrum of a particle system contains information about the particle size and refractive index. The visible extinction spectra of common monomodal and bimodal R-R particle size distributions were computed, and the variation in the visible extinction spectrum with particle size and refractive index was analyzed. The wavelengths at which the second-order differential extinction spectrum was discontinuous were selected as measurement wavelengths. Furthermore, the minimum and maximum wavelengths in the visible region were also selected as measurement wavelengths. The genetic algorithm was used as the inversion method under the dependent model. The computer simulation and experiments illustrate that it is feasible to analyze the extinction spectrum and use this selection method of the optimal wavelength in total light scattering particle sizing. The rough contour of the particle size distribution can be determined after the analysis of the visible extinction spectrum, so the search range of the particle size parameter is reduced in the optimal algorithm, and a more accurate inversion result can then be obtained using the selection method. The inversion results for monomodal and bimodal distributions remain satisfactory when 1% stochastic noise is added to the transmission extinction measurement values.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beltran, C; Kamal, H
Purpose: To provide a multicriteria optimization algorithm for intensity modulated radiation therapy using pencil proton beam scanning. Methods: Intensity modulated radiation therapy using pencil proton beam scanning requires efficient optimization algorithms to overcome the uncertainties in the Bragg peak locations. This work is focused on optimization algorithms that are based on Monte Carlo simulation of the treatment planning and use the weights and the dose volume histogram (DVH) control points to steer toward desired plans. The proton beam treatment planning process based on single objective optimization (representing a weighted sum of multiple objectives) usually leads to time-consuming iterations involving treatment planning team members. We provide a time-efficient multicriteria optimization algorithm developed to run on an NVIDIA GPU (Graphics Processing Unit) cluster. The multicriteria optimization algorithm's running time benefits from up-sampling of the CT voxel size of the calculations without loss of fidelity. Results: We will present preliminary results of multicriteria optimization for intensity modulated proton therapy based on DVH control points. The results will show optimization results of a phantom case and a brain tumor case. Conclusion: The multicriteria optimization of intensity modulated radiation therapy using pencil proton beam scanning provides a novel tool for treatment planning. Work supported by a grant from Varian Inc.
Highly-optimized TWSM software package for seismic diffraction modeling adapted for GPU-cluster
NASA Astrophysics Data System (ADS)
Zyatkov, Nikolay; Ayzenberg, Alena; Aizenberg, Arkady
2015-04-01
Oil-producing companies are concerned with increasing the resolution of seismic data for complex oil-and-gas bearing deposits connected with salt domes, basalt traps, reefs, lenses, etc. Known methods of seismic wave theory define the shape of a hydrocarbon accumulation with insufficient resolution, since they do not account for multiple diffractions explicitly. We elaborate an alternative seismic wave theory in terms of operators of propagation in layers and reflection-transmission at curved interfaces. An approximation of this theory is realized in the seismic frequency range as the Tip-Wave Superposition Method (TWSM). TWSM, based on the operator theory, allows evaluation of the wavefield in bounded domains/layers with geometrical shadow zones (e.g., salt domes, basalt traps, reefs, lenses), accounting for so-called cascade diffraction. Cascade diffraction includes edge waves from sharp edges, creeping waves near concave parts of interfaces, waves of the whispering galleries near convex parts of interfaces, etc. The basic algorithm of the TWSM package is based on multiplication of large matrices (reaching hundreds of terabytes in size). We use advanced information technologies for effective realization of the numerical procedures of TWSM. In particular, we actively use NVIDIA CUDA technology and GPU accelerators, allowing us to significantly improve the performance of the TWSM software package, which is important when using it for direct and inverse problems. The accuracy, stability and efficiency of the algorithm are justified by numerical examples with curved interfaces. The TWSM package and its separate components can be used in different modeling tasks, such as planning of acquisition systems, physical interpretation of laboratory modeling, and modeling of individual waves of different types, and in some inverse tasks, such as imaging in the case of a laterally inhomogeneous overburden and AVO inversion.
Dynamic motion planning of 3D human locomotion using gradient-based optimization.
Kim, Hyung Joo; Wang, Qian; Rahmatalla, Salam; Swan, Colby C; Arora, Jasbir S; Abdel-Malek, Karim; Assouline, Jose G
2008-06-01
Since humans can walk with an infinite variety of postures and limb movements, there is no unique solution to the modeling problem to predict human gait motions. Accordingly, we test herein the hypothesis that the redundancy of human walking mechanisms makes solving for human joint profiles and force time histories an indeterminate problem best solved by inverse dynamics and optimization methods. A new optimization-based human-modeling framework is thus described for predicting three-dimensional human gait motions on level and inclined planes. The basic unknowns in the framework are the joint motion time histories of a 25-degree-of-freedom human model and its six global degrees of freedom. The joint motion histories are calculated by minimizing an objective function such as deviation of the trunk from upright posture that relates to the human model's performance. A variety of important constraints are imposed on the optimization problem, including (1) satisfaction of dynamic equilibrium equations by requiring the model's zero moment point (ZMP) to lie within the instantaneous geometrical base of support, (2) foot collision avoidance, (3) limits on ground-foot friction, and (4) vanishing yawing moment. Analytical forms of objective and constraint functions are presented and discussed for the proposed human-modeling framework in which the resulting optimization problems are solved using gradient-based mathematical programming techniques. When the framework is applied to the modeling of bipedal locomotion on level and inclined planes, acyclic human walking motions that are smooth and realistic as opposed to less natural robotic motions are obtained. The aspects of the modeling framework requiring further investigation and refinement, as well as potential applications of the framework in biomechanics, are discussed.
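Constraint (1) above (keeping the zero moment point inside the base of support) can be sketched in one dimension with the standard flat-ground relation x_zmp = x_com − (z_com/g)·ẍ_com. The CoM trajectory and support-polygon extent below are invented for illustration; the actual framework evaluates this over a 25-DOF model.

```python
import numpy as np

# Sagittal-plane ZMP check for a constant-height CoM trajectory.
g = 9.81
z_com = 0.9                                      # CoM height (m), assumed constant
t = np.linspace(0.0, 1.0, 501)
x_com = 0.3 * t + 0.01 * np.sin(2 * np.pi * t)   # invented forward CoM motion (m)

dt = t[1] - t[0]
x_ddot = np.gradient(np.gradient(x_com, dt), dt)  # numerical CoM acceleration
x_zmp = x_com - (z_com / g) * x_ddot

support = (-0.05, 0.35)                          # heel-to-toe extent (m), invented
inside = (x_zmp >= support[0]) & (x_zmp <= support[1])
print(inside.all())                              # ZMP stays within the base of support
```

In the optimization framework this inequality becomes a nonlinear constraint on the joint trajectories at every time step; the gradient-based solver then trades it off against the smoothness and friction constraints.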
Optimization of contrast resolution by genetic algorithm in ultrasound tissue harmonic imaging.
Ménigot, Sébastien; Girault, Jean-Marc
2016-09-01
The development of ultrasound imaging techniques such as pulse inversion has improved tissue harmonic imaging. Nevertheless, no recommendation has been made to date for the design of the waveform transmitted through the medium being explored. Our aim was therefore to automatically find the optimal "imaging" wave which maximized the contrast resolution without a priori information. To avoid assumptions regarding the waveform, a genetic algorithm investigated the medium through the transmission of stochastic "explorer" waves. Moreover, these stochastic signals could be constrained by the type of generator available (bipolar or arbitrary). To implement it, we changed the current pulse inversion imaging system by including feedback. The method thus optimized the contrast resolution by adaptively selecting the samples of the excitation. In simulation, we benchmarked the contrast effectiveness of the best transmitted stochastic commands found against the usual fixed-frequency command. The optimization method converged quickly, after around 300 iterations, to the same optimal area. These results were confirmed experimentally. In the experimental case, the contrast resolution measured on a radiofrequency line could be improved by 6% with a bipolar generator and could increase by 15% with an arbitrary waveform generator.
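The pulse-inversion principle this system builds on is easy to demonstrate: in a weakly nonlinear medium modeled as y = a₁x + a₂x², summing the echoes from a pulse and its inverted copy cancels the fundamental and doubles the second-harmonic term. The pulse parameters and medium coefficients below are invented.

```python
import numpy as np

# Pulse inversion: medium(p) + medium(-p) keeps only even-order terms.
fs = 50e6
t = np.arange(0.0, 4e-6, 1.0 / fs)
f0 = 3e6
p = np.sin(2 * np.pi * f0 * t) * np.hanning(t.size)   # windowed transmit pulse

def medium(x, a1=1.0, a2=0.2):
    # Toy weakly nonlinear propagation model (invented coefficients).
    return a1 * x + a2 * x**2

summed = medium(p) + medium(-p)                       # pulse-inversion sum = 2*a2*p**2

spectrum = np.abs(np.fft.rfft(summed))
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
fund = spectrum[np.argmin(np.abs(freqs - f0))]
harm2 = spectrum[np.argmin(np.abs(freqs - 2 * f0))]
print(fund, harm2)   # fundamental ~0, second harmonic dominant
```

The genetic algorithm in the paper keeps this two-transmit structure but searches over the excitation samples themselves, using the measured contrast resolution as the fitness fed back into the loop.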
SU-E-J-193: Feasibility of MRI-Only Based IMRT Planning for Pancreatic Cancer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prior, P; Botros, M; Chen, X
2014-06-01
Purpose: With the increasing use of MRI simulation and the advent of MRI-guided delivery, it is desirable to use MRI only for treatment planning. In this study, we assess the dosimetric difference between MRI- and CT-based IMRT planning for pancreatic cancer. Methods: Planning CTs and MRIs acquired for a representative pancreatic cancer patient were used. MRI-based planning utilized forced relative electron density (rED) assignment of organ-specific values from ICRU Report 46, where rED = 1.029 for the PTV and rED = 1.036 for non-specified tissue (NST). Six IMRT plans were generated with clinical dose-volume (DV) constraints using a research Monaco planning system employing Monte Carlo dose calculation with an optional perpendicular magnetic field (MF) of 1.5 T. The following five plans were generated and compared with the planning CT: 1.) CT plan with MF and dose recalculation without optimization; 2.) MRI (T2) plan with target and OARs redrawn based on MRI, forced rED, no MF, and recalculation without optimization; 3.) Similar to 2 but with MF; 4.) MRI plan with MF but without optimization; and 5.) Similar to 4 but with optimization. Results: Generally, noticeable differences in PTV point doses and DV parameters (DVPs) between the CT- and MRI-based plans with and without the MF were observed. The differences between the optimized plans were generally small, mostly within 2%. Larger differences were observed in point doses and mean doses for certain OARs between the CT and MRI plans, mostly due to differences between image acquisition times. Conclusion: MRI-only based IMRT planning for pancreatic cancer is feasible. The differences observed between the optimized CT and MRI plans with or without the MF were practically negligible when excluding the differences between MRI- and CT-defined structures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tabibian, A; Kim, A; Rose, J
Purpose: A novel optimization technique was developed for field-in-field (FIF) chestwall radiotherapy using bolus every other day. The dosimetry was compared to the currently used optimization. Methods: The prior five patients treated at our clinic to the chestwall and supraclavicular nodes with a mono-isocentric four-field arrangement were selected for this study. The prescription was 5040 cGy in 28 fractions, 5 mm bolus every other day on the tangent fields, 6 and/or 10 MV x-rays, and multileaf collimation. Novelly, tangent FIF segments were forward-planned and optimized based on the composite bolus and non-bolus dose distribution simultaneously. The prescription was split into 14 fractions each for the bolus and non-bolus tangents. The same segments and monitor units were used for the bolus and non-bolus treatment. The plan was optimized until the desired coverage was achieved, 105% hotspots were minimized, and the maximum dose was below 108%. Each tangential field had fewer than 5 segments. Comparison plans were generated using FIF optimization with the same dosimetric goals, but using only the non-bolus calculation for FIF optimization. The non-bolus fields were then copied and bolus was applied. The same segments and monitor units were used for the bolus and non-bolus segments. Results: The prescription coverage of the chestwall, as defined by RTOG guidelines, was on average 51.8% for the plans that optimized bolus and non-bolus treatments simultaneously (SB) and 43.8% for the plans optimized to the non-bolus treatments (NB). Chestwall coverage at 90% of the prescription averaged 80.4% for SB and 79.6% for NB plans. The volume receiving 105% of the prescription was 1.9% for SB and 0.8% for NB plans on average. Conclusion: Simultaneously optimizing for bolus and non-bolus treatments noticeably improves prescription coverage of the chestwall while maintaining similar hotspots and 90% prescription coverage in comparison to optimizing only to non-bolus treatments.
3D CSEM data inversion using Newton and Halley class methods
NASA Astrophysics Data System (ADS)
Amaya, M.; Hansen, K. R.; Morten, J. P.
2016-05-01
For the first time in 3D controlled source electromagnetic data inversion, we explore the use of the Newton and the Halley optimization methods, which may show their potential when the cost function has a complex topology. The inversion is formulated as a constrained nonlinear least-squares problem which is solved by iterative optimization. These methods require the derivatives up to second order of the residuals with respect to model parameters. We show how Green's functions determine the high-order derivatives, and develop a diagrammatical representation of the residual derivatives. The Green's functions are efficiently calculated on-the-fly, making use of a finite-difference frequency-domain forward modelling code based on a multi-frontal sparse direct solver. This allows us to build the second-order derivatives of the residuals while keeping the memory cost on the same order as in a Gauss-Newton (GN) scheme. Model updates are computed with a trust-region based conjugate-gradient solver which does not require the computation of a stabilizer. We present inversion results for a synthetic survey and compare the GN, Newton, and super-Halley optimization schemes, and consider two different approaches to set the initial trust-region radius. Our analysis shows that the Newton and super-Halley schemes, using the same regularization configuration, add significant information to the inversion so that convergence is reached by different paths. In our simple resistivity model examples, the convergence speeds of the Newton and the super-Halley schemes are either similar or slightly superior to the convergence speed of the GN scheme close to the minimum of the cost function.
Due to the current noise levels and other measurement inaccuracies in geophysical investigations, this advantageous behaviour is at present of low consequence, but may, with the further improvement of geophysical data acquisition, be an argument for more accurate higher-order methods like those applied in this paper.
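The distinction between Gauss-Newton and full Newton that the paper exploits is the second-order residual term Σᵢ rᵢ∇²rᵢ, which GN drops. The toy fit below (not a CSEM problem; model and starting point are invented) shows both schemes side by side.

```python
import numpy as np

# Gauss-Newton vs. full Newton on a 2-parameter fit y = m2 * exp(-m1 * x).
x = np.linspace(0.0, 2.0, 21)
m_true = np.array([1.3, 2.0])
y = m_true[1] * np.exp(-m_true[0] * x)

def residual(m):
    return m[1] * np.exp(-m[0] * x) - y

def jacobian(m):
    e = np.exp(-m[0] * x)
    return np.column_stack([-m[1] * x * e, e])

def second_term(m):
    # sum_i r_i * (Hessian of r_i): the term Gauss-Newton neglects.
    r = residual(m)
    e = np.exp(-m[0] * x)
    H = np.zeros((2, 2))
    H[0, 0] = np.sum(r * m[1] * x**2 * e)
    H[0, 1] = H[1, 0] = np.sum(r * (-x * e))
    return H

def solve(m0, full_newton, iters=30):
    m = m0.copy()
    for _ in range(iters):
        J, r = jacobian(m), residual(m)
        H = J.T @ J + (second_term(m) if full_newton else 0.0)
        step = np.linalg.solve(H, J.T @ r)
        alpha = 1.0                          # simple backtracking for robustness
        while np.sum(residual(m - alpha * step) ** 2) > np.sum(r**2) and alpha > 1e-6:
            alpha *= 0.5
        m = m - alpha * step
    return m

m0 = np.array([0.5, 1.0])
print(solve(m0, False), solve(m0, True))     # both approach [1.3, 2.0]
```

On this zero-residual problem the extra term vanishes at the solution, mirroring the paper's observation that the higher-order methods mainly change the *path* to the minimum; their payoff depends on data accuracy, as noted above.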
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yarmand, H; Winey, B; Craft, D
2014-06-15
Purpose: To efficiently find quality-guaranteed treatment plans with the minimum number of beams for stereotactic body radiation therapy using RayStation. Methods: For a pre-specified pool of candidate beams, we use RayStation (a treatment planning software for clinical use) to identify the deliverable plan which uses all the beams, with the minimum dose to organs at risk (OARs) and the dose to the tumor and other structures in specified ranges. We then use the dose matrix information for the generated apertures from RayStation to solve a linear program to find the ideal plan with the same objective and constraints, allowing use of all beams. Finally, we solve a mixed integer programming formulation of the beam angle optimization (BAO) problem with the objective of minimizing the number of beams while remaining within a predetermined epsilon-optimality of the ideal plan with respect to the dose to OARs. Since the treatment plan optimization is a multicriteria optimization problem, the planner can exploit the multicriteria optimization capability of RayStation to navigate the ideal dose distribution Pareto surface and select a plan of desired target coverage versus OAR sparing, and then use the proposed technique to reduce the number of beams while guaranteeing quality. For the numerical experiments, two liver cases and one lung case with 33 non-coplanar beams are considered. Results: The ideal plan uses an impractically large number of beams. The proposed technique reduces the number of beams to the range of practical application (5 to 9 beams) while remaining in the epsilon-optimal range of 1% to 5% optimality gap. Conclusion: The proposed method can be integrated into a general algorithm for fast navigation of the ideal dose distribution Pareto surface and finding the treatment plan with the minimum number of beams, which corresponds to the delivery time, within the epsilon-optimality range of the desired ideal plan.
The project was supported by the Federal Share of program income earned by Massachusetts General Hospital on C06 CA059267, Proton Therapy Research and Treatment Center, and partially by RaySearch Laboratories.
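As a toy illustration of the beam-number minimization idea above, the sketch below enumerates beam subsets of increasing size and returns the first one whose (pre-computed) OAR dose stays within epsilon of the ideal all-beam plan. It is a brute-force stand-in for the paper's mixed integer program, not the authors' implementation; the `oar_dose` lookup table and all values are hypothetical.

```python
from itertools import combinations

def min_beams(oar_dose, ideal_oar_dose, epsilon):
    """Smallest beam subset whose optimized OAR dose stays within
    (1 + epsilon) of the ideal all-beam plan. oar_dose maps a frozenset
    of beam indices to the optimized OAR dose for pre-evaluated subsets."""
    beams = max(oar_dose, key=len)  # the all-beam set
    for k in range(1, len(beams) + 1):
        for subset in combinations(sorted(beams), k):
            d = oar_dose.get(frozenset(subset))
            if d is not None and d <= (1 + epsilon) * ideal_oar_dose:
                return set(subset)
    return set(beams)

# Toy example: 4 candidate beams; dropping beams raises OAR dose.
doses = {frozenset({0, 1, 2, 3}): 10.0, frozenset({0, 2}): 10.4,
         frozenset({0, 1, 2}): 10.1, frozenset({1, 3}): 11.5}
print(min_beams(doses, ideal_oar_dose=10.0, epsilon=0.05))  # {0, 2}
```

With a 5% gap the two-beam subset suffices; tightening epsilon to 0 forces the full beam set, mirroring the trade-off the abstract reports.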
NASA Technical Reports Server (NTRS)
Abbas, M. M.; Shapiro, G. L.; Allario, F.; Alvarez, J. M.
1981-01-01
A combination of two different techniques for the inversion of infrared laser heterodyne measurements of tenuous gases in the stratosphere by solar occultation is presented which incorporates the advantages of each technique. An experimental approach and inversion technique are developed which optimize the retrieval of concentration profiles by incorporating the onion peel collection scheme into the spectral inversion technique. A description of an infrared heterodyne spectrometer and the mode of observations for solar occultation measurements is presented, and the results of inversions of some synthetic ClO spectral lines corresponding to solar occultation limb-scans of the stratosphere are examined. A comparison between the new techniques and one of the current techniques indicates that considerable improvement in the accuracy of the retrieved profiles can be achieved. It is found that noise affects the accuracy of both techniques, but not in a straightforward manner, since there is interaction between the noise level, noise propagation through inversion, and the number of scans leading to an optimum retrieval.
Prostate Dose Escalation by Innovative Inverse Planning-Driven IMRT
2005-11-01
Galvin, J. M.; Low, D.; Palta, J. R.; Rosen, I.; Sharpe, M. B.; Xia, P.; Xiao, Y.; Xing, L.; Yu, C. X., Guidance document on delivery, treatment planning... Palta, J., Implementing IMRT in clinical practice: a joint document of the American Society for Therapeutic Radiology and Oncology and the American
Transition-Marking Behaviors of Adolescent Males at First Intercourse.
ERIC Educational Resources Information Center
McLean, Ann L.; Flanigan, Beverly J.
1993-01-01
Examined male transition-marking behaviors from adolescence into adulthood at first intercourse. Findings from 80 adolescent males revealed that alcohol use at first intercourse was unrelated to use of contraceptives at that time but was inversely related to whether first intercourse was planned. Planning was positively related to contraceptive…
Geoelectric Characterization of Thermal Water Aquifers Using 2.5D Inversion of VES Measurements
NASA Astrophysics Data System (ADS)
Gyulai, Á.; Szűcs, P.; Turai, E.; Baracza, M. K.; Fejes, Z.
2017-03-01
This paper presents a short theoretical summary of the series expansion-based 2.5D combined geoelectric weighted inversion (CGWI) method and highlights how the simultaneous character of this inversion reduces the number of unknowns. 2.5D CGWI is an approximate inversion method for the determination of 3D structures, which uses joint 2D forward modeling of dip- and strike-direction data. In the inversion procedure, Steiner's most frequent value method is applied to the automatic separation of dip- and strike-direction data and outliers. The workflow of the inversion and its practical application are presented in the study. For conventional vertical electrical sounding (VES) measurements, this method can determine the parameters of complex structures more accurately than single inversion. Field data show that the developed 2.5D CGWI can determine the optimal location for drilling an exploratory thermal water prospecting well. The novelty of this research is that the measured VES data in the dip and strike directions are jointly inverted by the 2.5D CGWI method.
SISYPHUS: A high performance seismic inversion factory
NASA Astrophysics Data System (ADS)
Gokhberg, Alexey; Simutė, Saulė; Boehm, Christian; Fichtner, Andreas
2016-04-01
In recent years, massively parallel high-performance computers have become the standard instruments for solving forward and inverse problems in seismology. The respective software packages dedicated to forward and inverse waveform modelling designed for such computers (SPECFEM3D, SES3D) have become mature and widely available. These packages achieve significant computational performance and provide researchers with an opportunity to solve larger problems at higher resolution within a shorter time. However, a typical seismic inversion process contains various activities that are beyond the common solver functionality. They include management of information on seismic events and stations, 3D models, observed and synthetic seismograms, pre-processing of the observed signals, computation of misfits and adjoint sources, minimization of misfits, and process workflow management. These activities are time consuming, seldom sufficiently automated, and therefore represent a bottleneck that can substantially offset the performance benefits provided by even the most powerful modern supercomputers. Furthermore, the typical system architecture of modern supercomputing platforms is oriented towards maximum computational performance and provides limited standard facilities for automation of the supporting activities. We present a prototype solution that automates all aspects of the seismic inversion process and is tuned for modern massively parallel high-performance computing systems. We address several major aspects of the solution architecture, which include (1) design of an inversion state database for tracing all relevant aspects of the entire solution process, (2) design of an extensible workflow management framework, (3) integration with wave propagation solvers, (4) integration with optimization packages, (5) computation of misfits and adjoint sources, and (6) process monitoring.
The inversion state database represents a hierarchical structure with branches for the static process setup, inversion iterations, and solver runs, each branch specifying information at the event, station and channel levels. The workflow management framework is based on an embedded scripting engine that allows definition of various workflow scenarios using a high-level scripting language and provides access to all available inversion components represented as standard library functions. At present the SES3D wave propagation solver is integrated in the solution; the work is in progress for interfacing with SPECFEM3D. A separate framework is designed for interoperability with an optimization module; the workflow manager and optimization process run in parallel and cooperate by exchanging messages according to a specially designed protocol. A library of high-performance modules implementing signal pre-processing, misfit and adjoint computations according to established good practices is included. Monitoring is based on information stored in the inversion state database and at present implements a command line interface; design of a graphical user interface is in progress. The software design fits well into the common massively parallel system architecture featuring a large number of computational nodes running distributed applications under control of batch-oriented resource managers. The solution prototype has been implemented on the "Piz Daint" supercomputer provided by the Swiss Supercomputing Centre (CSCS).
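A minimal sketch, under illustrative assumptions, of the workflow-manager idea described above: each step reads and updates a shared inversion-state record, standing in for the inversion state database. `run_solver` is a toy placeholder for a SES3D/SPECFEM3D run, and all names and values are hypothetical.

```python
def run_solver(state):
    """Stand-in for a wave propagation solver run: returns a per-event
    residual that shrinks as the inversion iterates."""
    return {ev: 1.0 / (state["iteration"] + 1) for ev in state["events"]}

def compute_misfit(residuals):
    """Aggregate per-event residuals into a single misfit value."""
    return sum(residuals.values())

def workflow(state, max_iter=3, tol=1.0):
    """Iterate solver runs and misfit evaluation, recording progress in
    the shared state record, until convergence or the iteration cap."""
    while state["iteration"] < max_iter:
        residuals = run_solver(state)
        state["misfit"] = compute_misfit(residuals)
        if state["misfit"] < tol:
            break
        state["iteration"] += 1
    return state

print(workflow({"iteration": 0, "events": ["EV01", "EV02"], "misfit": None}))
```

In the real system the state record lives in a database, the loop body is driven by a scripting engine, and the optimization module runs as a cooperating process; this sketch only shows the control flow.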
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumaran Nair, C; Hoffman, D; Wright, C
Purpose: We aim to evaluate a new commercial dose-mimicking inverse-planning application, designed to provide cross-platform treatment planning, for its dosimetric quality and efficiency. The clinical benefit of this application is that patients treated on an O-shaped linac can receive an equivalent plan on a conventional L-shaped linac as needed for workflow or machine downtime. Methods: The dose-mimicking optimization process seeks to create a DVH similar to that of an O-shaped linac-based plan with an alternative treatment technique (IMRT or VMAT), by maintaining target conformity and penalizing dose falloff outside the target. Ten head and neck (HN) helical delivery plans, including simple and complex cases, were selected for re-planning with the dose-mimicking application. All plans were generated for a 6 MV beam model, using 7-field/9-field IMRT and VMAT techniques. PTV coverage (D1, D99, and homogeneity index [HI]) and OAR avoidance (Dmean/Dmax) were compared. Results: The resulting dose-mimicked HN plans achieved acceptable PTV coverage for HI (VMAT 7.0±2.3, 7-fld 7.3±2.4, and 9-fld 7.0±2.4), D99 (98.0%±0.7%, 97.8%±0.7%, and 98.0%±0.7%), as well as D1 (106.4%±2.1%, 106.5%±2.2%, and 106.4%±2.1%), respectively. The OAR dose discrepancy varied: brainstem (2% to 4%), cord (3% to 6%), esophagus (−4% to −8%), larynx (−4% to 2%), and parotid (4% to 14%). Mimicked plans would typically be needed for 1–5 fractions of a treatment course, and we estimate <1% variance would be introduced in target coverage while maintaining comparably low dose to OARs. All mimicked plans were approved by an independent physician and passed patient-specific QA within our established tolerance. Conclusion: Dose-mimicked plans provide a practical alternative for responding to clinical workflow issues, and provide reliability for patient treatment.
The quality of dose mimicking for HN patients highly depends on the delivery technique, field numbers and angles, as well as user selection of structures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, D; Liu, Z; University of California, San Diego, La Jolla, CA
2015-06-15
Purpose: To demonstrate that utilization of a novel, intensity-modulation-capable, direction modulated brachytherapy (DMBT) tandem applicator can improve plan quality compared with the conventional T&R applicator during image guided cervical cancer brachytherapy. Methods: 45 cervical cancer patients treated with PDR brachytherapy were reviewed. Of them, a) 27 were treated using T&R only, b) 9 were treated using T&R with needles attached to the ring, and c) the remaining 9 were treated using T&R with needles attached to the ring (AN) as well as additional free-hand-loaded needles (FN). The DMBT tandem design has 6 peripheral holes of 1.3-mm diameter, grooved along a nonmagnetic tungsten alloy rod, enclosed in a plastic sheath with total 6.0-mm diameter. An in-house-coded inverse planning system was used for planning the DMBT and T&R cases. All typical clinical constraints, including OAR dose limits, dwell times, and loading patterns, were respected. For the DMBT and T&R applicators, the plans were optimized with the same conventional ring in place, but repeatedly planned with and without AN/FN needles. All generated plans were normalized to the same D90 as the clinically treated plans. Results: For the plans in category a), DMBT generally outperformed T&R, with average reductions in D2cc of −2.39%, −5.21%, and −2.69% for bladder, rectum, and sigmoid, respectively. For the plans in categories b) and c), DMBT generally outperformed T&R if the same AN/FN needles were utilized in both cases, with average reductions in D2cc of −1.82%, −3.40%, and −6.04%, respectively. For the cases where the needles were not utilized for either applicator, average D2cc reductions of −7.45%, −7.61%, and 17.47% were observed, respectively. Conclusions: Under the same clinical conditions, with/without needles, the DMBT applicator tends to generate more favorable plans compared with the conventional T&R applicator and, hence, is a promising technology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liengsawangwong, Raweewan; Yu, T.-K.; Sun, T.-L.
2007-11-01
Background: The purpose of this study was to determine whether the use of optimized CT treatment planning offers better coverage of axillary level III (LIII)/supraclavicular (SC) targets than the empirically derived dose prescriptions that are commonly used. Materials/Methods: Thirty-two consecutive breast cancer patients who underwent CT treatment planning of a SC field were evaluated. Each patient was categorized according to body mass index (BMI) class: normal, overweight, or obese. The SC and LIII nodal beds were contoured, and four treatment plans for each patient were generated. Three of the plans used empiric dose prescriptions, and these were compared with a CT-optimized plan. Each plan was evaluated by two criteria: whether 98% of the target volume received >90% of the prescribed dose, and whether <5% of the irradiated volume received 105% of the prescribed dose. Results: The mean depths of the SC and LIII targets were 3.2 cm (range, 1.4-6.7 cm) and 3.1 cm (range, 1.7-5.8 cm), respectively. The depth of these targets varied across BMI classes (p = 0.01). Among the four sets of plans, the CT-optimized plans were the most successful at achieving both of the dosimetry objectives for every BMI class (normal BMI, p = .003; overweight BMI, p < .0001; obese BMI, p < .001). Conclusions: Across all BMI classes, routine radiation prescriptions did not optimally cover the intended targets for every patient. Optimized CT-based treatment planning generated the most successful plans; therefore, we recommend routine CT simulation and treatment planning of SC fields in breast cancer.
NASA Astrophysics Data System (ADS)
Shirzaei, M.; Walter, T. R.
2009-10-01
Modern geodetic techniques provide valuable and near real-time observations of volcanic activity. Characterizing the source of deformation based on these observations has become of major importance in related monitoring efforts. We investigate two random search approaches, simulated annealing (SA) and genetic algorithm (GA), and utilize them in an iterated manner. The iterated approach helps to prevent GA in general and SA in particular from getting trapped in local minima, and it also increases redundancy for exploring the search space. We apply a statistical competency test for estimating the confidence interval of the inversion source parameters, considering their internal interaction through the model, the effect of the model deficiency, and the observational error. Here, we present and test this new randomly iterated search and statistical competency (RISC) optimization method together with GA and SA for the modeling of data associated with volcanic deformations. Following synthetic and sensitivity tests, we apply the improved inversion techniques to two episodes of activity in the Campi Flegrei volcanic region in Italy, observed by the interferometric synthetic aperture radar technique. Inversion of these data allows derivation of deformation source parameters and their associated quality so that we can compare the two inversion methods. The RISC approach was found to be an efficient method in terms of computation time and search results and may be applied to other optimization problems in volcanic and tectonic environments.
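The iterated search strategy described above can be sketched as restarting a basic simulated-annealing loop from the best point found so far, which reduces the chance of getting trapped in a local minimum. This is a generic illustration, not the authors' RISC implementation; the misfit function, step size, and cooling schedule are hypothetical.

```python
import math, random

def simulated_annealing(misfit, x0, steps=2000, t0=1.0, seed=0):
    """Minimize misfit(x) over a parameter vector via simulated annealing
    with Gaussian perturbations and a linear cooling schedule."""
    rng = random.Random(seed)
    x, fx = list(x0), misfit(x0)
    best, fbest = list(x), fx
    for i in range(steps):
        t = t0 * (1 - i / steps) + 1e-9          # cooling temperature
        cand = [xi + rng.gauss(0, 0.1) for xi in x]
        fc = misfit(cand)
        # Accept improvements always, worse moves with Boltzmann probability.
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
    return best, fbest

def iterated_sa(misfit, x0, restarts=5):
    """Iterated SA: each restart begins from the best solution so far."""
    best, fbest = list(x0), misfit(x0)
    for r in range(restarts):
        cand, fc = simulated_annealing(misfit, best, seed=r)
        if fc < fbest:
            best, fbest = cand, fc
    return best, fbest

# Toy misfit: recover hypothetical source parameters (x, y, depth).
truth = [1.0, -0.5, 2.0]
misfit = lambda p: sum((a - b) ** 2 for a, b in zip(p, truth))
sol, f = iterated_sa(misfit, [0.0, 0.0, 1.0])
print([round(v, 2) for v in sol], round(f, 4))
```

In the actual method the misfit would compare observed InSAR displacements with a deformation source model, and the confidence intervals would come from the statistical competency test.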
NASA Astrophysics Data System (ADS)
Romano, F.; Lorito, S.; Piatanesi, A.; Volpe, M.; Lay, T.; Tolomei, C.; Murphy, S.; Tonini, R.; Escalante, C.; Castro, M. J.; Gonzalez-Vida, J. M.; Macias, J.
2017-12-01
The Chile subduction zone is one of the most seismically active regions in the world and has hosted a number of great tsunamigenic earthquakes in the past. In particular, during the last 7 years three M8+ earthquakes occurred near the Chilean coast: the 2010 M8.8 Maule, the 2014 M8.1 Iquique, and the 2015 M8.3 Illapel earthquakes. The rupture process of these earthquakes has been studied using different kinds of geophysical observations, such as seismic, geodetic, and tsunami data; in particular, tsunami waveforms are important for constraining the slip on the offshore portion of the fault. However, it has been shown that forward modelling of tsunami data can be affected by the unavailability of accurate bathymetric models, especially in the vicinity of tide-gauges, and in the far field by water density gradients, ocean floor elasticity, or geopotential gravity changes, which are generally neglected. This can result in a mismatch between observed and predicted tsunami signals, thus affecting the retrieved tsunami source image. Recently, a method has been proposed for automatic correction of this mismatch during the nonlinear inversion (optimal time alignment, OTA; Romano et al., GRL, 2016). Here, we present a reappraisal of the joint inversion of tsunami data, with the OTA procedure, and geodetic data for the Maule, Iquique, and Illapel earthquakes. We compare the results with those obtained by tsunami inversion without OTA and with other published inversion results.
Game theory and risk-based leveed river system planning with noncooperation
NASA Astrophysics Data System (ADS)
Hui, Rui; Lund, Jay R.; Madani, Kaveh
2016-01-01
Optimal risk-based levee designs are usually developed for economic efficiency. However, in river systems with multiple levees, the planning and maintenance of different levees are controlled by different agencies or groups. For example, along many rivers, levees on opposite riverbanks constitute a simple leveed river system with each levee designed and controlled separately. Collaborative planning of the two levees can be economically optimal for the whole system. Independent and self-interested landholders on opposite riversides often are willing to separately determine their individual optimal levee plans, resulting in a less efficient leveed river system from an overall society-wide perspective (the tragedy of commons). We apply game theory to simple leveed river system planning where landholders on each riverside independently determine their optimal risk-based levee plans. Outcomes from noncooperative games are analyzed and compared with the overall economically optimal outcome, which minimizes net flood cost system-wide. The system-wide economically optimal solution generally transfers residual flood risk to the lower-valued side of the river, but is often impractical without compensating for flood risk transfer to improve outcomes for all individuals involved. Such compensation can be determined and implemented with landholders' agreements on collaboration to develop an economically optimal plan. By examining iterative multiple-shot noncooperative games with reversible and irreversible decisions, the costs of myopia for the future in making levee planning decisions show the significance of considering the externalities and evolution path of dynamic water resource problems to improve decision-making.
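The gap between independently optimal and system-optimal levee plans described above can be illustrated with a toy two-landholder game: each side picks a levee height minimizing its own cost, and the resulting Nash equilibrium is compared with the system-wide optimum. The cost function below is hypothetical, chosen only so that self-interested play overbuilds relative to the joint optimum.

```python
from itertools import product

HEIGHTS = [0, 1, 2, 3]  # discrete candidate levee heights

def cost(own, other, damage=10.0):
    """Hypothetical annualized cost for one riverbank: construction grows
    with own height; expected flood damage falls with own height but
    rises with the opposite bank's height (flow confined toward you)."""
    return 1.5 * own + damage / (1 + own) + 0.5 * other

def nash_equilibria():
    """Height pairs where each side is best-responding to the other."""
    eq = []
    for a, b in product(HEIGHTS, HEIGHTS):
        best_a = min(HEIGHTS, key=lambda h: cost(h, b))
        best_b = min(HEIGHTS, key=lambda h: cost(h, a))
        if a == best_a and b == best_b:
            eq.append((a, b))
    return eq

def system_optimum():
    """Height pair minimizing total cost across both riverbanks."""
    return min(product(HEIGHTS, HEIGHTS),
               key=lambda ab: cost(ab[0], ab[1]) + cost(ab[1], ab[0]))

print(nash_equilibria(), system_optimum())
```

With these numbers both landholders overbuild at equilibrium relative to the joint optimum, illustrating the inefficiency of noncooperative planning; compensation for risk transfer, as the abstract discusses, is what makes the collaborative plan implementable.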
On the optimization of electromagnetic geophysical data: Application of the PSO algorithm
NASA Astrophysics Data System (ADS)
Godio, A.; Santilano, A.
2018-01-01
The particle swarm optimization (PSO) algorithm solves constrained multi-parameter problems and is suitable for the simultaneous optimization of linear and nonlinear problems, under the assumption that the forward modeling is based on a good understanding of the ill-posed geophysical inverse problem. We apply PSO to solve the geophysical inverse problem of inferring an Earth model, i.e. the electrical resistivity at depth, consistent with the observed geophysical data. The method does not require an initial model and can be easily constrained, according to external information, for each single sounding. The optimization process for estimating the model parameters from the electromagnetic soundings focuses on the discussion of the objective function to be minimized. We discuss the possibility of introducing vertical and lateral constraints into the objective function, with an Occam-like regularization. A sensitivity analysis allowed us to check the performance of the algorithm. The reliability of the approach is tested on synthetic, real audio-magnetotelluric (AMT), and long-period MT data. The method appears able to solve complex problems and allows us to estimate the a posteriori distribution of the model parameters.
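A minimal PSO loop of the kind described above, applied to a toy resistivity-fitting misfit with an Occam-like roughness penalty. This is a generic sketch, not the authors' code; the inertia and attraction coefficients, the bounds, and the misfit are all illustrative.

```python
import random

def pso(objective, bounds, n_particles=30, iters=200, seed=1):
    """Minimal particle swarm: each particle is pulled toward its own
    best position and the swarm-wide best, with velocity inertia."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pval = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                # Clamp to the constrained search range for this parameter.
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            v = objective(pos[i])
            if v < pval[i]:
                pbest[i], pval[i] = pos[i][:], v
                if v < gval:
                    gbest, gval = pos[i][:], v
    return gbest, gval

# Toy misfit: fit layer resistivities (log10 ohm·m) to "observed" values,
# with an Occam-like roughness penalty on adjacent layers.
obs = [1.0, 2.0, 1.5]
def misfit(m):
    data = sum((a - b) ** 2 for a, b in zip(m, obs))
    rough = sum((m[i + 1] - m[i]) ** 2 for i in range(len(m) - 1))
    return data + 0.01 * rough

best, val = pso(misfit, bounds=[(0.0, 3.0)] * 3)
print([round(x, 2) for x in best], round(val, 4))
```

The per-parameter bounds play the role of the per-sounding external constraints the abstract mentions; no initial model is required beyond the random swarm.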
Solving iTOUGH2 simulation and optimization problems using the PEST protocol
DOE Office of Scientific and Technical Information (OSTI.GOV)
Finsterle, S.A.; Zhang, Y.
2011-02-01
The PEST protocol has been implemented in the iTOUGH2 code, allowing the user to link any simulation program (with ASCII-based inputs and outputs) to iTOUGH2's sensitivity analysis, inverse modeling, and uncertainty quantification capabilities. These application models can be pre- or post-processors of the TOUGH2 non-isothermal multiphase flow and transport simulator, or programs that are unrelated to the TOUGH suite of codes. PEST-style template and instruction files are used, respectively, to pass input parameters updated by the iTOUGH2 optimization routines to the model, and to retrieve the model-calculated values that correspond to observable variables. We summarize the iTOUGH2 capabilities and demonstrate the flexibility added by the PEST protocol for the solution of a variety of simulation-optimization problems. In particular, the combination of loosely coupled and tightly integrated simulation and optimization routines provides both the flexibility and control needed to solve challenging inversion problems for the analysis of multiphase subsurface flow and transport systems.
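PEST-style template files mark parameter fields with a delimiter character declared on a `ptf` header line; the optimizer rewrites those fields with updated values before each model run. A minimal sketch of that substitution step (the parameter names and file contents are hypothetical, and real PEST templates support more formatting options):

```python
import re

def apply_template(template_text, params):
    """Fill a PEST-style template: the first line 'ptf <marker>' declares
    the delimiter; each field between two markers names a parameter and
    is replaced by its value, formatted to the field width."""
    lines = template_text.splitlines()
    kind, marker = lines[0].split()
    assert kind.lower() == "ptf", "not a PEST template file"
    field = re.compile(re.escape(marker) + r"(.*?)" + re.escape(marker))
    def fill(m):
        name = m.group(1).strip()
        return f"{params[name]:>{len(m.group(0))}.6g}"
    return "\n".join(field.sub(fill, line) for line in lines[1:])

tpl = "ptf #\npermeability  #k1        #\nporosity      #phi       #\n"
print(apply_template(tpl, {"k1": 1.2e-14, "phi": 0.35}))
```

The companion instruction files work in the opposite direction, telling the optimizer where in the model's output files to read each simulated observable.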
Towards inverse modeling of turbidity currents: The inverse lock-exchange problem
NASA Astrophysics Data System (ADS)
Lesshafft, Lutz; Meiburg, Eckart; Kneller, Ben; Marsden, Alison
2011-04-01
A new approach is introduced for turbidite modeling, leveraging the potential of computational fluid dynamics methods to simulate the flow processes that led to turbidite formation. The practical use of numerical flow simulation for turbidite modeling has so far been hindered by the need to specify parameters and initial flow conditions that are a priori unknown. The present study proposes a method to determine optimal simulation parameters via an automated optimization process. An iterative procedure matches deposit predictions from successive flow simulations against available localized reference data, as may in practice be obtained from well logs, and aims at convergence towards the best-fit scenario. The final result is a prediction of the entire deposit thickness and local grain size distribution. The optimization strategy is based on a derivative-free, surrogate-based technique. Direct numerical simulations are performed to compute the flow dynamics. A proof of concept is successfully conducted for the simple test case of a two-dimensional lock-exchange turbidity current. The optimization approach is demonstrated to accurately retrieve the initial conditions used in a reference calculation.
Investigating multi-objective fluence and beam orientation IMRT optimization
NASA Astrophysics Data System (ADS)
Potrebko, Peter S.; Fiege, Jason; Biagioli, Matthew; Poleszczuk, Jan
2017-07-01
Radiation Oncology treatment planning requires compromises to be made between clinical objectives that are invariably in conflict. It would be beneficial to have a ‘bird’s-eye-view’ perspective of the full spectrum of treatment plans that represent the possible trade-offs between delivering the intended dose to the planning target volume (PTV) while optimally sparing the organs-at-risk (OARs). In this work, the authors demonstrate Pareto-aware radiotherapy evolutionary treatment optimization (PARETO), a multi-objective tool featuring such bird’s-eye-view functionality, which optimizes fluence patterns and beam angles for intensity-modulated radiation therapy (IMRT) treatment planning. The problem of IMRT treatment plan optimization is managed as a combined monolithic problem, where all beam fluence and angle parameters are treated equally during the optimization. To achieve this, PARETO is built around a powerful multi-objective evolutionary algorithm, called Ferret, which simultaneously optimizes multiple fitness functions that encode the attributes of the desired dose distribution for the PTV and OARs. The graphical interfaces within PARETO provide useful information such as: the convergence behavior during optimization, trade-off plots between the competing objectives, and a graphical representation of the optimal solution database allowing for the rapid exploration of treatment plan quality through the evaluation of dose-volume histograms and isodose distributions. PARETO was evaluated for two relatively complex clinical cases, a paranasal sinus and a pancreas case. The end result of each PARETO run was a database of optimal (non-dominated) treatment plans that demonstrated trade-offs between the OAR and PTV fitness functions, which were all equally good in the Pareto-optimal sense (where no one objective can be improved without worsening at least one other). 
Ferret was able to produce high quality solutions even though a large number of parameters, such as beam fluence and beam angles, were included in the optimization.
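The Pareto-optimality test used to build such a solution database (no objective can improve without worsening at least one other) reduces to a non-dominated filter over the candidate plans. A minimal sketch with hypothetical two-objective plan scores, both minimized:

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective (minimization)
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(plans):
    """Keep only plans not dominated by any other plan."""
    return [p for p in plans
            if not any(dominates(q, p) for q in plans if q is not p)]

# Toy plans scored as (PTV dose error, OAR mean dose) -- both minimized.
plans = [(0.02, 25.0), (0.02, 30.0), (0.05, 18.0), (0.10, 17.0), (0.04, 18.0)]
print(pareto_front(plans))
```

Every surviving plan represents a different trade-off between target coverage and OAR sparing; the planner then browses this front, which is what the trade-off plots in the PARETO interface visualize.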
Impact of database quality in knowledge-based treatment planning for prostate cancer.
Wall, Phillip D H; Carver, Robert L; Fontenot, Jonas D
2018-03-13
This article investigates dose-volume prediction improvements in a common knowledge-based planning (KBP) method using a Pareto plan database compared with using a conventional, clinical plan database. Two plan databases were created using retrospective, anonymized data of 124 volumetric modulated arc therapy (VMAT) prostate cancer patients. The clinical plan database (CPD) contained planning data from each patient's clinically treated VMAT plan, which were manually optimized by various planners. The multicriteria optimization database (MCOD) contained Pareto-optimal plan data from VMAT plans created using a standardized multicriteria optimization protocol. Overlap volume histograms, incorporating fractional organ-at-risk volumes only within the treatment fields, were computed for each patient and used to match new patient anatomy to similar database patients. For each database patient, CPD and MCOD KBP predictions were generated for D10, D30, D50, D65, and D80 of the bladder and rectum in a leave-one-out manner. Prediction achievability was evaluated through a replanning study on a subset of 31 randomly selected database patients using the best KBP predictions, regardless of plan database origin, as planning goals. MCOD predictions were significantly lower than CPD predictions for all 5 bladder dose-volumes and rectum D50 (P = .004) and D65 (P < .001), whereas CPD predictions for rectum D10 (P = .005) and D30 (P < .001) were significantly less than MCOD predictions. KBP predictions were statistically achievable in the replans for all predicted dose-volumes, excluding D10 of the bladder (P = .03) and rectum (P = .04). Compared with clinical plans, replans showed significant average reductions in Dmean for bladder (7.8 Gy; P < .001) and rectum (9.4 Gy; P < .001), while maintaining statistically similar planning target volume, femoral head, and penile bulb dose.
KBP dose-volume predictions derived from Pareto plans were more optimal overall than those resulting from manually optimized clinical plans, which significantly improved KBP-assisted plan quality. This work investigates how the plan quality of knowledge databases affects the performance and achievability of dose-volume predictions from a common knowledge-based planning approach for prostate cancer. Bladder and rectum dose-volume predictions derived from a database of standardized Pareto-optimal plans were compared with those derived from clinical plans manually designed by various planners. Dose-volume predictions from the Pareto plan database were significantly lower overall than those from the clinical plan database, without compromising achievability. Copyright © 2018 Elsevier Inc. All rights reserved.
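The overlap-volume-histogram matching step of such a KBP method can be sketched as a nearest-neighbor lookup: a new patient's OVH is compared against database patients, and the most similar patients' achieved dose-volume metrics are used as the prediction. The distance measure, the record layout, and all values below are illustrative, not from the study.

```python
def ovh_distance(a, b):
    """L1 distance between two overlap-volume histograms sampled at the
    same set of expansion distances."""
    return sum(abs(x - y) for x, y in zip(a, b))

def predict_dose_volume(new_ovh, database, k=1):
    """Predict an OAR dose-volume metric for a new patient as the mean
    over the k database patients with the most similar OVH (the new
    patient must not itself be in the database, i.e. leave-one-out)."""
    ranked = sorted(database, key=lambda rec: ovh_distance(new_ovh, rec["ovh"]))
    return sum(rec["d50"] for rec in ranked[:k]) / k

# Hypothetical database: OVH samples plus the achieved rectum D50 (Gy).
db = [{"ovh": [0.9, 0.6, 0.3], "d50": 42.0},
      {"ovh": [0.7, 0.4, 0.1], "d50": 30.0},
      {"ovh": [0.8, 0.5, 0.2], "d50": 35.0}]
print(predict_dose_volume([0.82, 0.52, 0.22], db))
```

The study's finding is that what this lookup returns depends strongly on database quality: matching against Pareto-optimal plans yields lower (yet still achievable) predictions than matching against manually optimized clinical plans.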
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hosini, M; GALAL, M; Emam, I
2014-06-01
Purpose: To investigate the planning and dosimetric advantages of direct aperture optimization (DAO) over beamlet optimization in IMRT treatment of head and neck (H/N) and prostate cancers. Methods: Five head and neck and five prostate patients were planned using the beamlet optimizer in the Elekta XiO ver 4.6 IMRT treatment planning system. Based on our experience in beamlet IMRT optimization, PTVs in H/N plans were prescribed to 70 Gy delivered by 7 fields, while prostate PTVs were prescribed to 76 Gy with 9 fields. In all plans, fields were set to be equally spaced. All cases were re-planned using the Direct Aperture optimizer in the Prowess Panther ver 5.01 IMRT planning system with the same configurations and dose constraints. Plans were evaluated according to ICRU criteria, number of segments, number of monitor units, and planning time. Results: For H/N plans, the near-maximum dose (D2) and the dose covering 95% of the PTV (D95) improved by 4% with DAO. For organs at risk (OAR), DAO reduced the V30 of the spinal cord, right parotid, and left parotid by 60%, 54%, and 53%, respectively. This considerable dosimetric quality improvement was achieved using 25% less planning time, 46% fewer segments, and 51% fewer monitor units. In DAO prostate plans, both D2 and D95 for the PTV improved by only 2%. The V30 of the right femur, left femur, and bladder improved by 35%, 15%, and 3%, respectively. On the contrary, the rectum V30 worsened by 9%. However, the numbers of monitor units and segments decreased by 20% and 25%, respectively, and the planning time was also significantly reduced. Conclusion: DAO introduces considerable advantages over beamlet optimization with regard to organ-at-risk sparing. However, no significant improvement occurred in most studied PTVs.
Li, Yongbao; Tian, Zhen; Shi, Feng; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun
2015-04-07
Intensity-modulated radiation treatment (IMRT) plan optimization needs beamlet dose distributions. Pencil-beam or superposition/convolution type algorithms are typically used because of their high computational speed. However, inaccurate beamlet dose distributions may mislead the optimization process and hinder the resulting plan quality. To solve this problem, the Monte Carlo (MC) simulation method has been used to compute all beamlet doses prior to the optimization step. The conventional approach samples the same number of particles from each beamlet. Yet this is not the optimal use of MC in this problem. In fact, there are beamlets that have very small intensities after solving the plan optimization problem. For those beamlets, it may be possible to use fewer particles in dose calculations to increase efficiency. Based on this idea, we have developed a new MC-based IMRT plan optimization framework that iteratively performs MC dose calculation and plan optimization. At each dose calculation step, the particle numbers for beamlets were adjusted based on the beamlet intensities obtained by solving the plan optimization problem in the previous iteration step. We modified a GPU-based MC dose engine to allow simultaneous computation of a large number of beamlet doses. To test the accuracy of our modified dose engine, we compared the dose from a broad beam and the summed beamlet doses in this beam in an inhomogeneous phantom. Agreement within 1% for the maximum difference and 0.55% for the average difference was observed. We then validated the proposed MC-based optimization schemes in one lung IMRT case. It was found that the conventional scheme required 10^6 particles from each beamlet to achieve an optimization result within 3% difference in fluence map and 1% difference in dose from the ground truth. In contrast, the proposed scheme achieved the same level of accuracy with on average 1.2 × 10^5 particles per beamlet.
Correspondingly, the computation time including both MC dose calculations and plan optimizations was reduced by a factor of 4.4, from 494 to 113 s, using only one GPU card.
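The adaptive particle-allocation idea above (fewer MC histories for beamlets that carry little intensity) can be sketched as a proportional budget split with a sampling floor so that every beamlet still receives some statistics. The floor value and all numbers are illustrative, not from the paper.

```python
def allocate_particles(intensities, total, floor=1000):
    """Distribute a total MC particle budget across beamlets in
    proportion to their current optimized intensities, with a floor so
    low-weight beamlets are still sampled on the next iteration."""
    n = len(intensities)
    s = sum(intensities)
    if s == 0:
        return [total // n] * n
    return [floor + int((total - floor * n) * w / s) for w in intensities]

# After one optimization pass, most beamlets carry little intensity:
weights = [5.0, 0.1, 0.0, 3.0, 0.05]
budget = allocate_particles(weights, total=600000)
print(budget, sum(budget))
```

Iterating dose calculation and optimization with such a reallocation concentrates histories where they affect the plan, which is the source of the paper's roughly 4x speedup.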
Hybrid Robust Multi-Objective Evolutionary Optimization Algorithm
2009-03-10
pp. 594-606. 8. Inverse Approaches to Drying of Thin Bodies With Significant Shrinkage Effects (with G. H. Kanevce, L. P. Kanevce, V. B. Mitrevski)...Kanevce, L. Kanevce, V. Mitrevski), ICCES: International Conference on Computational & Experimental Engineering and Sciences, Honolulu, Hawaii, March 17...Miami Beach, FL, April 16-18, 2007. 16. Inverse Approaches to Drying of Sliced Foods (with Kanevce, G. H., Kanevce, Lj. P., and Mitrevski, V. B.
A Novel Discrete Optimal Transport Method for Bayesian Inverse Problems
NASA Astrophysics Data System (ADS)
Bui-Thanh, T.; Myers, A.; Wang, K.; Thiery, A.
2017-12-01
We present the Augmented Ensemble Transform (AET) method for generating approximate samples from a high-dimensional posterior distribution as a solution to Bayesian inverse problems. Solving large-scale inverse problems is critical for some of the most relevant and impactful scientific endeavors of our time. Therefore, constructing novel methods for solving the Bayesian inverse problem in more computationally efficient ways can have a profound impact on the science community. This research derives the novel AET method for exploring a posterior by solving a sequence of linear programming problems, resulting in a series of transport maps which map prior samples to posterior samples, allowing for the computation of moments of the posterior. We show both theoretical and numerical results, indicating this method can offer superior computational efficiency when compared to other SMC methods. Most of this efficiency is derived from matrix scaling methods to solve the linear programming problem and derivative-free optimization for particle movement. We use this method to determine inter-well connectivity in a reservoir and the associated uncertainty related to certain parameters. The attached file shows the difference between the true parameter and the AET parameter in an example 3D reservoir problem. The error is within the Morozov discrepancy allowance with lower computational cost than other particle methods.
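The abstract attributes much of the method's efficiency to matrix-scaling solutions of the transport linear program. A standard matrix-scaling scheme of this kind is the Sinkhorn iteration for entropically regularized optimal transport, sketched below; this is a generic stand-in for the scaling step, not the authors' AET implementation, and the cost matrix, marginals, and regularization strength are illustrative.

```python
import math

def sinkhorn(cost, a, b, eps=0.1, iters=500):
    """Entropic matrix-scaling approximation to the optimal-transport
    linear program: alternately rescale rows and columns of the Gibbs
    kernel K = exp(-cost/eps) so the plan matches marginals a and b."""
    n, m = len(cost), len(cost[0])
    K = [[math.exp(-cost[i][j] / eps) for j in range(m)] for i in range(n)]
    u, v = [1.0] * n, [1.0] * m
    for _ in range(iters):
        u = [a[i] / sum(K[i][j] * v[j] for j in range(m)) for i in range(n)]
        v = [b[j] / sum(K[i][j] * u[i] for i in range(n)) for j in range(m)]
    # Transport plan: P[i][j] = u[i] * K[i][j] * v[j]
    return [[u[i] * K[i][j] * v[j] for j in range(m)] for i in range(n)]

# Two prior samples mapped to two posterior weights; cheap pairings get mass.
plan = sinkhorn([[0.0, 1.0], [1.0, 0.0]], [0.5, 0.5], [0.5, 0.5])
```

The resulting plan can be read as a map sending prior-sample mass to posterior-sample locations, from which posterior moments are computed.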
Panchapagesan, Sankaran; Alwan, Abeer
2011-01-01
In this paper, a quantitative study of acoustic-to-articulatory inversion for vowel speech sounds by analysis-by-synthesis using the Maeda articulatory model is performed. For chain matrix calculation of vocal tract (VT) acoustics, the chain matrix derivatives with respect to area function are calculated and used in a quasi-Newton method for optimizing articulatory trajectories. The cost function includes a distance measure between natural and synthesized first three formants, and parameter regularization and continuity terms. Calibration of the Maeda model to two speakers, one male and one female, from the University of Wisconsin x-ray microbeam (XRMB) database, using a cost function, is discussed. Model adaptation includes scaling the overall VT and the pharyngeal region and modifying the outer VT outline using measured palate and pharyngeal traces. The inversion optimization is initialized by a fast search of an articulatory codebook, which was pruned using XRMB data to improve inversion results. Good agreement between estimated midsagittal VT outlines and measured XRMB tongue pellet positions was achieved for several vowels and diphthongs for the male speaker, with average pellet-VT outline distances around 0.15 cm, smooth articulatory trajectories, and less than 1% average error in the first three formants. PMID:21476670
Zhu, Lin; Dai, Zhenxue; Gong, Huili; ...
2015-06-12
Understanding the heterogeneity arising from the complex architecture of sedimentary sequences in alluvial fans is challenging. This study develops a statistical inverse framework in a multi-zone transition probability approach for characterizing the heterogeneity in alluvial fans. An analytical solution of the transition probability matrix is used to define the statistical relationships among different hydrofacies and their mean lengths, integral scales, and volumetric proportions. A statistical inversion is conducted to identify the multi-zone transition probability models and estimate the optimal statistical parameters using the modified Gauss–Newton–Levenberg–Marquardt method. The Jacobian matrix is computed by the sensitivity equation method, which results in an accurate inverse solution with quantification of parameter uncertainty. We use the Chaobai River alluvial fan in the Beijing Plain, China, as an example for elucidating the methodology of alluvial fan characterization. The alluvial fan is divided into three sediment zones. In each zone, the explicit mathematical formulations of the transition probability models are constructed with optimized different integral scales and volumetric proportions. The hydrofacies distributions in the three zones are simulated sequentially by the multi-zone transition probability-based indicator simulations. Finally, the result of this study provides the heterogeneous structure of the alluvial fan for further study of flow and transport simulations.
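The core idea of a transition-probability facies model can be illustrated with a tiny one-dimensional Markov-chain simulation: a matrix gives the probability of each hydrofacies following each other facies at one lag step, and a column of cells is drawn sequentially from it. The facies names and matrix values below are invented for demonstration and are not the fitted Chaobai River parameters.

```python
import random

FACIES = ["gravel", "sand", "clay"]
# T[i][j] = probability that facies j occurs one lag step after facies i.
# Diagonal entries control mean lengths: mean run length of facies i
# (in lag steps) is 1 / (1 - T[i][i]).
T = [
    [0.6, 0.3, 0.1],
    [0.2, 0.5, 0.3],
    [0.1, 0.3, 0.6],
]

def simulate_column(n_cells, start=0, seed=42):
    """Sequentially simulate a vertical column of facies from the
    transition probability matrix T, starting from facies `start`."""
    rng = random.Random(seed)
    states = [start]
    for _ in range(n_cells - 1):
        states.append(rng.choices(range(len(FACIES)), weights=T[states[-1]])[0])
    return [FACIES[s] for s in states]

column = simulate_column(20)
```

The multi-zone approach in the abstract amounts to fitting a separate (3-D, analytically parameterized) version of `T` in each sediment zone and conditioning the indicator simulation on data.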
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shi, F; Tian, Z; Jia, X
Purpose: In treatment plan optimization for Intensity Modulated Radiation Therapy (IMRT), after a plan is initially developed by a dosimetrist, the attending physician evaluates its quality and often would like to improve it. As opposed to having the dosimetrist implement the improvements, it is desirable to have the physician directly and efficiently modify the plan for a more streamlined and effective workflow. In this project, we developed an interactive optimization system for physicians to conveniently and efficiently fine-tune iso-dose curves. Methods: An interactive interface is developed under C++/Qt. The physician first examines iso-dose lines. S/he then picks an iso-dose curve to be improved and drags it to a more desired configuration using a computer mouse or touchpad. Once the mouse is released, a voxel-based optimization engine is launched. The weighting factors corresponding to voxels between the iso-dose lines before and after the dragging are modified. The underlying algorithm then takes these factors as input to re-optimize the plan in near real-time on a GPU platform, yielding a new plan best matching the physician's desire. The re-optimized DVHs and iso-dose curves are then updated for the next iteration of modifications. This process is repeated until a plan satisfactory to the physician is achieved. Results: We have tested this system for a series of IMRT plans. Results indicate that our system provides physicians an intuitive and efficient tool to edit the iso-dose curves according to their preference. The input information is used to guide plan re-optimization, which is achieved in near real-time using our GPU-based optimization engine. Typically, a satisfactory plan can be developed by a physician in a few minutes using this tool. Conclusion: With our system, physicians are able to manipulate iso-dose curves according to their preferences. Preliminary results demonstrate the feasibility and effectiveness of this tool.
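The voxel-weight-driven re-optimization step can be sketched as a weighted quadratic dose-matching problem: after a drag, the weights of the affected voxels are raised, and the fluence is re-solved. The toy projected-gradient solver below shows the general idea only; it is not the GPU engine of the abstract, and the dose matrix, weights, and step size are made up.

```python
def reoptimize(D, prescribed, weights, x0, step=0.01, iters=200):
    """Minimize sum_v weights[v] * (dose[v] - prescribed[v])**2 over the
    nonnegative fluence x, where dose = D @ x, by projected gradient
    descent. Voxels flagged by the physician's edit carry larger weights."""
    x = list(x0)
    n_vox, n_beam = len(D), len(D[0])
    for _ in range(iters):
        dose = [sum(D[v][b] * x[b] for b in range(n_beam)) for v in range(n_vox)]
        grad = [2 * sum(weights[v] * (dose[v] - prescribed[v]) * D[v][b]
                        for v in range(n_vox)) for b in range(n_beam)]
        x = [max(0.0, x[b] - step * grad[b]) for b in range(n_beam)]  # x >= 0
    return x

# Two voxels, two beamlets; voxel 0 was flagged by the physician (weight 10),
# so its prescribed dose is matched much more tightly.
D = [[1.0, 0.0], [0.0, 1.0]]
x = reoptimize(D, prescribed=[2.0, 1.0], weights=[10.0, 1.0], x0=[0.0, 0.0])
```

The higher-weighted voxel converges to its prescription far faster, which is exactly the lever the interactive dragging exposes.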
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chiu, J; Ma, L
2015-06-15
Purpose: To develop a treatment delivery and planning strategy that increases the number of beams to minimize dose to brain tissue surrounding a target while maximizing dose coverage of the target. Methods: We analyzed 14 different treatment plans via Leksell PFX and 4C. For standardization, single-tumor cases were chosen. Original treatment plans were compared with two optimized plans. The number of beams was increased by varying tilt angles of the patient head while maintaining the original isocenter, the beam positions in the x-, y- and z-axes, collimator size, and beam blocking. PFX optimized plans increased beam numbers with three pre-set tilt angles (70, 90, and 110 degrees), and 4C optimized plans increased beam numbers with tilt angles varied arbitrarily over a range of 30 to 150 degrees. Optimized treatment plans were compared dosimetrically with original treatment plans. Results: Comparing total normal tissue isodose volumes between original and optimized plans, the low-level percentage isodose volumes decreased in all plans. Despite the addition of multiple beams up to a factor of 25, beam-on times for 1 tilt angle versus 3 or more tilt angles were comparable (<1 min.). In 64% (9/14) of the studied cases, the volume percentage decreased by >5%, with the highest value reaching 19%. The addition of more tilt angles correlates with a greater decrease in irradiated normal brain volume. Selectivity and coverage for original and optimized plans remained comparable. Conclusion: Adding a large number of additional focused beams with variable patient head tilt improves dose fall-off for brain radiosurgery. The study demonstrates the technical feasibility of adding beams to decrease irradiated normal brain volume.
NASA Astrophysics Data System (ADS)
Cui, Y.; Brioude, J. F.; Angevine, W. M.; McKeen, S. A.; Henze, D. K.; Bousserez, N.; Liu, Z.; McDonald, B.; Peischl, J.; Ryerson, T. B.; Frost, G. J.; Trainer, M.
2016-12-01
Production of unconventional natural gas grew rapidly during the past ten years in the US, which led to an increase in emissions of methane (CH4) and, depending on the shale region, nitrogen oxides (NOx). In terms of radiative forcing, CH4 is the second most important greenhouse gas after CO2. NOx is a precursor of ozone (O3) in the troposphere and of nitrate particles, both of which are regulated by the US Clean Air Act. Emission estimates of CH4 and NOx from the shale regions are still highly uncertain. We present top-down estimates of CH4 and NOx surface fluxes from the Haynesville and Fayetteville shale production regions using aircraft data collected during the Southeast Nexus of Climate Change and Air Quality (SENEX) field campaign (June-July 2013) and the Shale Oil and Natural Gas Nexus (SONGNEX) field campaign (March-May 2015) within a mesoscale inversion framework. The inversion method is based on a mesoscale Bayesian inversion system using multiple transport models. EPA's 2011 National CH4 and NOx Emission Inventories are used as prior information to optimize CH4 and NOx emissions. Furthermore, the posterior CH4 emission estimates are used to constrain NOx emission estimates using a flux ratio inversion technique. Sensitivity of the posterior estimates to the use of off-diagonal terms in the error covariance matrices, the transport models, and the prior estimates is discussed. Compared to ground-based in-situ observations, the optimized CH4 and NOx inventories improve ground-level CH4 and O3 concentrations calculated by the Weather Research and Forecasting mesoscale model coupled with chemistry (WRF-Chem).
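The essence of a Bayesian flux inversion can be shown with a minimal scalar example: a prior flux estimate with its variance is updated by one observation seen through a linear transport operator. This is a pedagogical sketch of the update equations only, not the multi-model mesoscale system of the abstract; all numbers and the operator `h` are invented.

```python
def posterior_flux(x_prior, var_prior, obs, var_obs, h):
    """Scalar Gaussian Bayesian update for a surface flux x observed
    through a linear transport relation obs = h * x + noise."""
    gain = var_prior * h / (h * h * var_prior + var_obs)   # Kalman gain
    x_post = x_prior + gain * (obs - h * x_prior)          # corrected flux
    var_post = (1 - gain * h) * var_prior                  # reduced variance
    return x_post, var_post

# Uncertain prior flux of 10 units; one precise mole-fraction observation
# pulls the estimate upward while shrinking its variance.
x, v = posterior_flux(x_prior=10.0, var_prior=25.0, obs=30.0, var_obs=1.0, h=2.0)
```

The real system solves the matrix version of the same update, with transport-model footprints in place of `h` and off-diagonal covariance terms coupling fluxes in space and time.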
On the ability of a global atmospheric inversion to constrain variations of CO2 fluxes over Amazonia
NASA Astrophysics Data System (ADS)
Molina, L.; Broquet, G.; Imbach, P.; Chevallier, F.; Poulter, B.; Bonal, D.; Burban, B.; Ramonet, M.; Gatti, L. V.; Wofsy, S. C.; Munger, J. W.; Dlugokencky, E.; Ciais, P.
2015-07-01
The exchanges of carbon, water and energy between the atmosphere and the Amazon basin have global implications for the current and future climate. Here, the global atmospheric inversion system of the Monitoring of Atmospheric Composition and Climate (MACC) service is used to study the seasonal and interannual variations of biogenic CO2 fluxes in Amazonia during the period 2002-2010. The system assimilated surface measurements of atmospheric CO2 mole fractions made at more than 100 sites over the globe into an atmospheric transport model. The present study adds measurements from four surface stations located in tropical South America, a region poorly covered by CO2 observations. The estimates of net ecosystem exchange (NEE) optimized by the inversion are compared to an independent estimate of NEE upscaled from eddy-covariance flux measurements in Amazonia. They are also qualitatively evaluated against reports on the seasonal and interannual variations of the land sink in South America from the scientific literature. We attempt to assess the impact on NEE of the strong droughts in 2005 and 2010 (due to severe and longer-than-usual dry seasons) and of the extreme rainfall conditions registered in 2009. The spatial variations of the seasonal and interannual variability of optimized NEE are also investigated. While the inversion supports the assumption of strong spatial heterogeneity of these variations, the results reveal critical limitations of the coarse-resolution transport model, the surface observation network in South America in recent years, and the present knowledge of modelling uncertainties in South America that prevent our inversion from capturing the seasonal patterns of fluxes across Amazonia. However, some patterns from the inversion seem consistent with the anomaly of moisture conditions in 2009.
On the ability of a global atmospheric inversion to constrain variations of CO2 fluxes over Amazonia
NASA Astrophysics Data System (ADS)
Molina, L.; Broquet, G.; Imbach, P.; Chevallier, F.; Poulter, B.; Bonal, D.; Burban, B.; Ramonet, M.; Gatti, L. V.; Wofsy, S. C.; Munger, J. W.; Dlugokencky, E.; Ciais, P.
2015-01-01
The exchanges of carbon, water, and energy between the atmosphere and the Amazon Basin have global implications for current and future climate. Here, the global atmospheric inversion system of the Monitoring of Atmospheric Composition and Climate service (MACC) was used to further study the seasonal and interannual variations of biogenic CO2 fluxes in Amazonia. The system assimilated surface measurements of atmospheric CO2 mole fractions made over more than 100 sites over the globe into an atmospheric transport model. This study added four surface stations located in tropical South America, a region poorly covered by CO2 observations. The estimates of net ecosystem exchange (NEE) optimized by the inversion were compared to independent estimates of NEE upscaled from eddy-covariance flux measurements in Amazonia, and against reports on the seasonal and interannual variations of the land sink in South America from the scientific literature. We focused on the impact of the interannual variation of the strong droughts in 2005 and 2010 (due to severe and longer-than-usual dry seasons), and of the extreme rainfall conditions registered in 2009. The spatial variations of the seasonal and interannual variability of optimized NEE were also investigated. While the inversion supported the assumption of strong spatial heterogeneity of these variations, the results revealed critical limitations that prevent global inversion frameworks from capturing the data-driven seasonal patterns of fluxes across Amazonia. In particular, it highlighted issues due to the configuration of the observation network in South America and the lack of continuity of the measurements. However, some robust patterns from the inversion seemed consistent with the abnormal moisture conditions in 2009.
Zhang, Bo; Duan, Haibin
2017-01-01
Three-dimensional path planning of an uninhabited combat aerial vehicle (UCAV) is a complicated optimization problem, which mainly focuses on optimizing the flight route considering different types of constraints in a complex combat environment. A novel predator-prey pigeon-inspired optimization (PPPIO) is proposed to solve the UCAV three-dimensional path planning problem in a dynamic environment. Pigeon-inspired optimization (PIO) is a new bio-inspired optimization algorithm in which a map-and-compass operator model and a landmark operator model are used to search for the best result of a function. The predator-prey concept is adopted to improve global best properties and enhance convergence speed. The characteristics of the optimal path are presented in the form of a cost function. Comparative simulation results show that the proposed PPPIO algorithm is more efficient than basic PIO, particle swarm optimization (PSO), and differential evolution (DE) in solving UCAV three-dimensional path planning problems.
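The two PIO operators named in the abstract can be sketched in a bare-bones minimizer: a map-and-compass phase that steers each pigeon toward the global best with a decaying velocity, then a landmark phase that repeatedly halves the flock and contracts it toward its center. This is a generic sketch of basic PIO on a toy sphere function, without the predator-prey modification that is the paper's contribution; all parameter values are illustrative.

```python
import math
import random

def pio_minimize(f, dim, bounds, n_pigeons=30, t1=60, t2=40, R=0.2, seed=1):
    """Minimize f over [lo, hi]^dim with basic pigeon-inspired optimization."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_pigeons)]
    V = [[0.0] * dim for _ in range(n_pigeons)]
    best = min(X, key=f)[:]
    # Map-and-compass operator: velocity decays as exp(-R*t), pull to best.
    for t in range(1, t1 + 1):
        for i in range(n_pigeons):
            for d in range(dim):
                V[i][d] = V[i][d] * math.exp(-R * t) + rng.random() * (best[d] - X[i][d])
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            if f(X[i]) < f(best):
                best = X[i][:]
    # Landmark operator: keep the better half, drift toward its center.
    for _ in range(t2):
        X.sort(key=f)
        X = X[:max(2, len(X) // 2)]
        center = [sum(x[d] for x in X) / len(X) for d in range(dim)]
        X = [[min(hi, max(lo, x[d] + rng.random() * (center[d] - x[d])))
              for d in range(dim)] for x in X]
        for x in X:
            if f(x) < f(best):
                best = x[:]
    return best

sol = pio_minimize(lambda p: sum(c * c for c in p), dim=3, bounds=(-5.0, 5.0))
```

For path planning, `f` would instead be the path cost function (length, threat exposure, and constraint penalties) evaluated over the parameters describing a candidate route.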
Diagnostics for the optimization of an 11 keV inverse Compton scattering x-ray source
NASA Astrophysics Data System (ADS)
Chauchat, A.-S.; Brasile, J.-P.; Le Flanchec, V.; Nègre, J.-P.; Binet, A.; Ortega, J.-M.
2013-04-01
Within the scope of a collaboration between Thales Communications & Security and CEA DAM DIF, 11 keV X-rays were produced by inverse Compton scattering at the ELSA facility. In this type of experiment, X-ray observation relies on accurate electron- and laser-beam interaction diagnostics and on suitable X-ray detectors. The low interaction probability between the <100 μm wide, 12 ps [rms] long electron and photon pulses requires careful optimization of their spatial and temporal overlap. Another issue was observing the 11 keV X-rays in the ambient radioactive noise of the linear accelerator. For that, we used a very sensitive detection scheme based on radioluminescent screens.
NASA Astrophysics Data System (ADS)
Shirazi, Abolfazl
2016-10-01
This article introduces a new method to optimize finite-burn orbital manoeuvres based on a modified evolutionary algorithm. Optimization is carried out by converting the orbital manoeuvre into a parameter optimization problem, assigning inverse tangent functions to the changes in the direction angles of the thrust vector. The problem is analysed using boundary delimitation in a common optimization algorithm. A method is introduced to achieve acceptable values for the optimization variables using nonlinear simulation, which results in an enlarged convergence domain. The presented algorithm benefits from high optimality and fast convergence. A numerical example of a three-dimensional optimal orbital transfer is presented and the accuracy of the proposed algorithm is shown.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chow, J; Jiang, R; Kiciak, A
2016-06-15
Purpose: This study compared the rectal dose-volume consistency, equivalent uniform dose (EUD) and normal tissue complication probability (NTCP) in prostate intensity modulated radiotherapy (IMRT) and volumetric modulated arc therapy (VMAT). Methods: For forty prostate IMRT and fifty VMAT patients treated using the same dose prescription (78 Gy/39 fractions) and dose-volume criteria in inverse planning optimization, the rectal EUD and NTCP were calculated for each patient. The rectal dose-volume consistency, showing the variability of the dose-volume histogram (DVH) among patients, was defined and calculated based on the deviation between the mean and the corresponding rectal DVH. Results: In both the prostate IMRT and VMAT plans, the rectal EUD and NTCP were found to decrease with the rectal volume. The decrease rates for the IMRT plans (EUD = 0.47 × 10⁻³ Gy cm⁻³ and NTCP = 3.94 × 10⁻² % cm⁻³) were higher than those for VMAT (EUD = 0.28 × 10⁻³ Gy cm⁻³ and NTCP = 2.61 × 10⁻² % cm⁻³). In addition, the dependences of the rectal EUD and NTCP on the dose-volume consistency were found to be very similar between the prostate IMRT and VMAT plans, showing that both delivery techniques have similar variations of the rectal EUD and NTCP with the dose-volume consistency. Conclusion: Dependences of the dose-volume consistency on the rectal EUD and NTCP were compared between the prostate IMRT and VMAT plans. Both rectal EUD and NTCP decreased with increasing rectal volume. The variation rates of the rectal EUD and NTCP with rectal volume were higher for the IMRT plans than for VMAT. However, variations of the rectal dose-volume consistency with the rectal EUD and NTCP were not significant for either delivery technique.
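The EUD and NTCP quantities compared in this abstract are standard dosimetric metrics and can be computed from a DVH as sketched below: the generalized EUD is a power mean over the dose distribution, and the Lyman-Kutcher-Burman (LKB) NTCP maps it through a normal CDF. The parameter values in the example are textbook-style illustrations for a rectum-like organ, not the values fitted in this study.

```python
import math

def geud(dvh, a):
    """Generalized EUD from a differential DVH given as (dose_Gy,
    volume_fraction) pairs: EUD = (sum_i v_i * d_i**a) ** (1/a)."""
    return sum(v * d ** a for d, v in dvh) ** (1.0 / a)

def ntcp_lkb(eud, td50, m):
    """Lyman-Kutcher-Burman NTCP: normal CDF of (EUD - TD50)/(m * TD50)."""
    t = (eud - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Rectum-like example: half the volume at 70 Gy, half at 30 Gy.
dvh = [(70.0, 0.5), (30.0, 0.5)]
eud = geud(dvh, a=8.3)            # large a -> EUD approaches the max dose
p = ntcp_lkb(eud, td50=76.9, m=0.13)
```

Computing these two numbers per patient and regressing them against rectal volume is essentially the per-patient analysis the abstract describes.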
Social Planning for Small Cities.
ERIC Educational Resources Information Center
Meyers, James
Derived mainly from publications by the League of California Cities, this guide to social planning for small cities presents the following: (1) social planning definitions; (2) a checklist of social planning concerns (provision for: adequate income and economic opportunity; optimal environmental conditions for basic material needs; optimal health…